Jan 28 15:45:26 crc systemd[1]: Starting Kubernetes Kubelet... Jan 28 15:45:26 crc restorecon[4739]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 28 15:45:26 
crc restorecon[4739]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 
15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc 
restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 
crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 
crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:26 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 
15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:45:27 crc 
restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc 
restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:45:27 crc restorecon[4739]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 15:45:28 crc kubenswrapper[4903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:45:28 crc kubenswrapper[4903]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 15:45:28 crc kubenswrapper[4903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:45:28 crc kubenswrapper[4903]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 15:45:28 crc kubenswrapper[4903]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 15:45:28 crc kubenswrapper[4903]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.189640 4903 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195090 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195118 4903 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195123 4903 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195127 4903 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195131 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195135 4903 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195140 4903 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195143 4903 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195147 4903 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195151 4903 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195156 4903 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195159 4903 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195165 4903 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195171 4903 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195184 4903 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195188 4903 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195192 4903 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195196 4903 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195200 4903 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195204 4903 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195207 4903 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195211 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195214 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195218 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195221 4903 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195225 4903 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195228 4903 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195232 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195236 4903 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195241 4903 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195246 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195249 4903 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195253 4903 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195257 4903 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195261 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195264 4903 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195268 4903 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195271 4903 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195276 4903 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195280 4903 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195285 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195289 4903 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195294 4903 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195297 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195302 4903 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195307 4903 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195311 4903 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195314 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195318 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195322 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195326 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195330 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195333 4903 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195336 4903 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195340 4903 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 
15:45:28.195343 4903 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195347 4903 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195350 4903 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195353 4903 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195358 4903 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195362 4903 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195366 4903 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195370 4903 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195375 4903 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195379 4903 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195383 4903 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195387 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195394 4903 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195399 4903 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195403 4903 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.195408 4903 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195522 4903 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195559 4903 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195570 4903 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195576 4903 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195583 4903 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195589 4903 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195596 4903 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195602 4903 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195606 4903 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195611 4903 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195617 4903 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195622 4903 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195627 4903 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195631 4903 flags.go:64] FLAG: --cgroup-root="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195635 4903 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195639 4903 flags.go:64] FLAG: --client-ca-file="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195644 4903 flags.go:64] FLAG: --cloud-config="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195648 4903 flags.go:64] FLAG: --cloud-provider="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195654 4903 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195659 4903 flags.go:64] FLAG: --cluster-domain="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195664 4903 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195669 4903 flags.go:64] FLAG: --config-dir="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195673 4903 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195677 4903 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195683 4903 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195688 4903 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195692 4903 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195696 4903 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195701 4903 flags.go:64] FLAG: --contention-profiling="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195705 4903 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195709 4903 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195714 4903 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195718 4903 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195724 4903 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195729 4903 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195733 4903 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195737 4903 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195742 4903 flags.go:64] FLAG: --enable-server="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195746 4903 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195752 4903 flags.go:64] FLAG: --event-burst="100" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195756 4903 flags.go:64] FLAG: --event-qps="50" 
Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195761 4903 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195765 4903 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195769 4903 flags.go:64] FLAG: --eviction-hard="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195775 4903 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195779 4903 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195784 4903 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195788 4903 flags.go:64] FLAG: --eviction-soft="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195792 4903 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195796 4903 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195800 4903 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195805 4903 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195809 4903 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195813 4903 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195817 4903 flags.go:64] FLAG: --feature-gates="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195822 4903 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195826 4903 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195831 4903 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195835 4903 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195839 4903 flags.go:64] FLAG: --healthz-port="10248" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195843 4903 flags.go:64] FLAG: --help="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195848 4903 flags.go:64] FLAG: --hostname-override="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195852 4903 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195856 4903 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195860 4903 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195865 4903 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195870 4903 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195874 4903 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195878 4903 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195882 4903 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195886 4903 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 15:45:28 crc 
kubenswrapper[4903]: I0128 15:45:28.195890 4903 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195894 4903 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195899 4903 flags.go:64] FLAG: --kube-reserved="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195903 4903 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195907 4903 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195911 4903 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195915 4903 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195919 4903 flags.go:64] FLAG: --lock-file="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195923 4903 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195927 4903 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195932 4903 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195939 4903 flags.go:64] FLAG: --log-json-split-stream="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195944 4903 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195948 4903 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195953 4903 flags.go:64] FLAG: --logging-format="text" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195957 4903 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195961 4903 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195965 4903 flags.go:64] FLAG: --manifest-url="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195970 4903 flags.go:64] FLAG: --manifest-url-header="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195976 4903 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195980 4903 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195985 4903 flags.go:64] FLAG: --max-pods="110" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195989 4903 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195994 4903 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.195998 4903 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196002 4903 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196007 4903 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196011 4903 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196015 4903 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 
15:45:28.196027 4903 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196032 4903 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196036 4903 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196040 4903 flags.go:64] FLAG: --pod-cidr="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196044 4903 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196052 4903 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196056 4903 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196060 4903 flags.go:64] FLAG: --pods-per-core="0" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196064 4903 flags.go:64] FLAG: --port="10250" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196069 4903 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196073 4903 flags.go:64] FLAG: --provider-id="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196077 4903 flags.go:64] FLAG: --qos-reserved="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196080 4903 flags.go:64] FLAG: --read-only-port="10255" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196085 4903 flags.go:64] FLAG: --register-node="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196089 4903 flags.go:64] FLAG: --register-schedulable="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196093 4903 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196101 4903 flags.go:64] FLAG: --registry-burst="10" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196106 4903 flags.go:64] FLAG: --registry-qps="5" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196110 4903 flags.go:64] FLAG: --reserved-cpus="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196115 4903 flags.go:64] FLAG: --reserved-memory="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196121 4903 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196126 4903 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196130 4903 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196134 4903 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196138 4903 flags.go:64] FLAG: --runonce="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196142 4903 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196147 4903 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196151 4903 flags.go:64] FLAG: --seccomp-default="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196155 4903 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196159 4903 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 15:45:28 crc 
kubenswrapper[4903]: I0128 15:45:28.196164 4903 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196168 4903 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196172 4903 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196177 4903 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196181 4903 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196185 4903 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196189 4903 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196193 4903 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196197 4903 flags.go:64] FLAG: --system-cgroups="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196201 4903 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196208 4903 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196212 4903 flags.go:64] FLAG: --tls-cert-file="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196216 4903 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196222 4903 flags.go:64] FLAG: --tls-min-version="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196226 4903 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196230 4903 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196234 4903 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196239 4903 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196243 4903 flags.go:64] FLAG: --v="2" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196249 4903 flags.go:64] FLAG: --version="false" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196255 4903 flags.go:64] FLAG: --vmodule="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196261 4903 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196265 4903 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196366 4903 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196373 4903 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196376 4903 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196380 4903 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196384 4903 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196387 4903 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:45:28 crc 
kubenswrapper[4903]: W0128 15:45:28.196391 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196394 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196398 4903 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196401 4903 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196405 4903 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196409 4903 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196413 4903 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196417 4903 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196421 4903 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196424 4903 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196428 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196431 4903 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196435 4903 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196438 4903 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196442 4903 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196445 4903 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196453 4903 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196456 4903 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196461 4903 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
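[editor's note] The long run of flags.go:64 entries above is the kubelet echoing every command-line flag with its effective value (for example --config="/etc/kubernetes/kubelet.conf" and --kubeconfig="/var/lib/kubelet/kubeconfig"). A hedged sketch for collecting those pairs from saved journal text into a map, so they can be compared against the config file, is below; the parsing is based only on the FLAG: --name="value" shape visible in this log.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// flagLine matches entries such as:
//   I0128 15:45:28.195664 4903 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
var flagLine = regexp.MustCompile(`flags\.go:\d+\] FLAG: (--[\w.-]+)="(.*)"`)

func main() {
	flags := map[string]string{}
	sc := bufio.NewScanner(os.Stdin) // e.g. journalctl -u kubelet --no-pager | go run .
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := flagLine.FindStringSubmatch(sc.Text()); m != nil {
			flags[m[1]] = m[2]
		}
	}
	fmt.Printf("parsed %d kubelet flags\n", len(flags))
	for _, k := range []string{"--config", "--kubeconfig", "--node-ip"} {
		fmt.Printf("%s = %q\n", k, flags[k])
	}
}
```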
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196466 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196469 4903 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196475 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196479 4903 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196483 4903 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196486 4903 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196490 4903 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196493 4903 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196497 4903 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196502 4903 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196506 4903 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196509 4903 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196513 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196517 4903 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196520 4903 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196541 4903 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196545 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196550 4903 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196555 4903 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196559 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196563 4903 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196567 4903 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196570 4903 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196574 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196578 4903 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196582 4903 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196585 4903 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196588 4903 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196592 4903 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196597 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196601 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196604 4903 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196608 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196611 4903 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196615 4903 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196620 4903 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196624 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196628 4903 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196632 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196636 4903 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196640 4903 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196643 4903 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196647 4903 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196651 4903 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196654 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.196658 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.196670 4903 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.208089 4903 server.go:491] "Kubelet version" 
kubeletVersion="v1.31.5" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.208133 4903 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208240 4903 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208248 4903 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208253 4903 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208257 4903 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208261 4903 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208265 4903 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208268 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208273 4903 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208278 4903 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208282 4903 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208286 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208289 4903 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208293 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208297 4903 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208301 4903 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208305 4903 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208309 4903 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208313 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208317 4903 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208322 4903 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208328 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208332 4903 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208337 4903 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208342 4903 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208348 4903 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208357 4903 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208365 4903 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208370 4903 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208377 4903 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208385 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208391 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208396 4903 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208400 4903 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208404 4903 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208408 4903 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208413 4903 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208417 4903 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208422 4903 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208427 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208437 4903 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208443 4903 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208447 4903 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208452 4903 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208457 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208463 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:45:28 crc kubenswrapper[4903]: 
W0128 15:45:28.208468 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208474 4903 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208478 4903 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208483 4903 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208488 4903 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208493 4903 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208497 4903 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208502 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208506 4903 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208510 4903 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208513 4903 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208517 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208520 4903 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208524 4903 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208553 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208560 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208565 4903 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208568 4903 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208572 4903 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208578 4903 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208588 4903 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208596 4903 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208601 4903 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208606 4903 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208611 4903 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208616 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.208625 4903 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208776 4903 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208788 4903 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208795 4903 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208800 4903 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208806 4903 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208811 4903 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208817 4903 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
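[editor's note] The same block of "unrecognized feature gate" warnings repeats several times above (at the .195xxx, .196xxx and .208xxx timestamps), which looks like the gate set being evaluated more than once during startup; the names are OpenShift-level gates that the kubelet's own feature-gate registry does not appear to define, so the warnings are noisy rather than fatal, and the I-level "feature gates: {map[...]}" line records the values that actually took effect. A small sketch for tallying the distinct unrecognized gates from journal output, assuming the same log shape as above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// unrecognized matches warnings like:
//   W0128 15:45:28.195118 4903 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
var unrecognized = regexp.MustCompile(`unrecognized feature gate: (\S+)`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin) // e.g. journalctl -u kubelet --no-pager | go run .
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := unrecognized.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	names := make([]string, 0, len(counts))
	for n := range counts {
		names = append(names, n)
	}
	sort.Strings(names)
	fmt.Printf("%d distinct unrecognized feature gates\n", len(names))
	for _, n := range names {
		fmt.Printf("%-55s seen %d times\n", n, counts[n])
	}
}
```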
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208823 4903 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208828 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208833 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208838 4903 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208843 4903 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208847 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208852 4903 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208857 4903 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208863 4903 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208867 4903 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208871 4903 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208876 4903 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208880 4903 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208886 4903 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208891 4903 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208896 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208900 4903 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208905 4903 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208910 4903 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208915 4903 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208919 4903 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208924 4903 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208929 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208934 4903 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208939 4903 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208943 4903 feature_gate.go:330] 
unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208948 4903 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208952 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208957 4903 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208963 4903 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208968 4903 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208973 4903 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208978 4903 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208983 4903 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208988 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208993 4903 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.208998 4903 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209004 4903 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209011 4903 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209016 4903 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209021 4903 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209025 4903 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209030 4903 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209034 4903 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209039 4903 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209043 4903 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209048 4903 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209052 4903 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209056 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209061 4903 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209066 4903 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209070 4903 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209075 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209081 4903 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209087 4903 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209093 4903 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209098 4903 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209103 4903 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209108 4903 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209112 4903 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209116 4903 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209121 4903 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209125 4903 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.209130 4903 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.209137 4903 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.210404 4903 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.214137 4903 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.215161 4903 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
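[editor's note] The entries above show the kubelet loading its client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem and concluding that no bootstrap is needed; the lines that follow log the certificate's expiration and rotation deadline. A minimal, hedged sketch of reading that same expiry locally with Go's standard crypto/x509 package (the file path is taken from the log; no kubelet API is involved):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path as logged by certificate_store.go; adjust if your node differs.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The file holds the client cert and key; walk the PEM blocks and
	// report the validity window of the first CERTIFICATE block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("subject:    %s\n", cert.Subject)
		fmt.Printf("not before: %s\nnot after:  %s\n", cert.NotBefore, cert.NotAfter)
		return
	}
	fmt.Println("no CERTIFICATE block found")
}
```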
Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.216590 4903 server.go:997] "Starting client certificate rotation" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.216624 4903 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.216790 4903 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-25 11:40:55.364117148 +0000 UTC Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.216875 4903 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.249549 4903 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.253040 4903 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.254178 4903 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.271441 4903 log.go:25] "Validated CRI v1 runtime API" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.311787 4903 log.go:25] "Validated CRI v1 image API" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.313794 4903 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.320209 4903 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-15-40-14-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.320258 4903 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.340014 4903 manager.go:217] Machine: {Timestamp:2026-01-28 15:45:28.337285954 +0000 UTC m=+0.613257485 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654116352 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:42f25525-e039-4b4b-9161-1620e166e9cf BootID:9977edb2-96fc-47bd-97a1-108db3bc28fb Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 
DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b4:f5:00 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b4:f5:00 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bd:bd:7f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:71:44:cb Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3e:e1:01 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:20:db:90 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:ed:73:bc Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5a:d5:06:ab:2d:8c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3a:e4:c6:ed:48:16 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654116352 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.340368 4903 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.340736 4903 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.342169 4903 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.342660 4903 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.342710 4903 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.343972 4903 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 
15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.344015 4903 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.344376 4903 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.344412 4903 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.344741 4903 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.344848 4903 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.348140 4903 kubelet.go:418] "Attempting to sync node with API server" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.348162 4903 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.348177 4903 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.348193 4903 kubelet.go:324] "Adding apiserver pod source" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.348205 4903 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.352413 4903 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.353745 4903 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
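[editor's note] The nodeConfig blob logged by container_manager_linux.go above is the container manager's effective configuration for this node: 200m CPU, 350Mi memory and 350Mi ephemeral-storage held back as SystemReserved, hard eviction when memory.available drops below 100Mi or when nodefs/imagefs free space and inodes fall below the listed percentages, a 4096 pod PID limit, and cgroup v2 with the systemd driver that CRI-O reported a few entries earlier. A hedged Go sketch with hypothetical struct names that decodes just the reserved resources and eviction thresholds out of such a blob (values trimmed from the log line above):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeConfig models only the fields of interest from the logged JSON.
type nodeConfig struct {
	SystemReserved         map[string]string `json:"SystemReserved"`
	HardEvictionThresholds []struct {
		Signal string `json:"Signal"`
		Value  struct {
			Quantity   *string `json:"Quantity"` // nil when the threshold is percentage-based
			Percentage float64 `json:"Percentage"`
		} `json:"Value"`
	} `json:"HardEvictionThresholds"`
}

func main() {
	blob := `{"SystemReserved":{"cpu":"200m","memory":"350Mi","ephemeral-storage":"350Mi"},
	  "HardEvictionThresholds":[
	    {"Signal":"memory.available","Value":{"Quantity":"100Mi","Percentage":0}},
	    {"Signal":"nodefs.available","Value":{"Quantity":null,"Percentage":0.1}},
	    {"Signal":"imagefs.available","Value":{"Quantity":null,"Percentage":0.15}}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(blob), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("system reserved:", cfg.SystemReserved)
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("evict when %s < %s\n", t.Signal, *t.Value.Quantity)
		} else {
			fmt.Printf("evict when %s < %.0f%%\n", t.Signal, t.Value.Percentage*100)
		}
	}
}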
Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.354073 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.354088 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.354165 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.354167 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.355862 4903 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357329 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357354 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357363 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357370 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357382 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357392 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357401 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357414 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357424 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357432 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357469 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.357477 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.358583 4903 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 
15:45:28.359485 4903 server.go:1280] "Started kubelet" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.360616 4903 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:28 crc systemd[1]: Started Kubernetes Kubelet. Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.361618 4903 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.361822 4903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.362602 4903 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.363251 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.363312 4903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.363507 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:16:35.436752278 +0000 UTC Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.363728 4903 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.363752 4903 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.363847 4903 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.364389 4903 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.364569 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.364651 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.364899 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="200ms" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.366618 4903 factory.go:153] Registering CRI-O factory Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.366655 4903 factory.go:221] Registration of the crio container factory successfully Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.366722 4903 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix 
/run/containerd/containerd.sock: connect: no such file or directory Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.366733 4903 factory.go:55] Registering systemd factory Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.366741 4903 factory.go:221] Registration of the systemd container factory successfully Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.366766 4903 factory.go:103] Registering Raw factory Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.366786 4903 manager.go:1196] Started watching for new ooms in manager Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.369764 4903 server.go:460] "Adding debug handlers to kubelet server" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.366571 4903 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.251:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eef955b0511e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:45:28.359408101 +0000 UTC m=+0.635379612,LastTimestamp:2026-01-28 15:45:28.359408101 +0000 UTC m=+0.635379612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.371286 4903 manager.go:319] Starting recovery of all containers Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.380841 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.380914 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.380936 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.380956 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.380973 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.380991 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381008 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381026 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381048 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381066 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381084 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381101 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381117 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381137 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381184 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381202 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381219 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381235 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381251 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381269 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381285 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381302 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381321 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381338 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381357 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381374 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381422 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381467 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381489 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.381514 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383432 4903 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383467 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383484 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383498 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383512 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383526 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383558 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383572 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383588 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383601 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383616 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383631 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383644 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383659 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383673 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383686 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383698 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383711 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383726 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383738 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383750 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383764 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383776 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383794 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383810 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383825 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383839 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383852 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383865 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383883 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383897 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383910 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383922 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383933 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383946 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383958 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383984 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.383997 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384009 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384022 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384040 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384055 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384068 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384081 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384094 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384105 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384118 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384131 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384146 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384159 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384172 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384186 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384199 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384213 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384225 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384241 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384255 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384271 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384282 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384295 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384308 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384320 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384334 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384358 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384418 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384434 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384446 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384457 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384469 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384481 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384492 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384503 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384516 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384542 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384556 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384574 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384586 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384599 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384610 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384622 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384635 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384646 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384661 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384672 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384685 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384697 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384711 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384724 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384738 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384750 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384764 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384776 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384788 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384801 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384813 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384826 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384839 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384851 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384873 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384891 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384905 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384917 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384930 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384945 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384961 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384980 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.384994 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385007 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385019 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385033 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385045 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385058 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385070 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385083 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385095 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385108 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385121 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385133 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385145 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385159 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385173 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385185 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385198 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385212 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385225 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385238 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385250 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385261 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385336 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385354 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385368 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385384 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385398 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385409 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385420 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385431 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385445 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385459 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385470 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385483 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385494 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385506 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385518 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385545 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385558 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385571 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385583 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385595 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385611 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385621 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385633 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385644 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385655 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385668 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385717 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385729 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385787 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385799 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385810 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385822 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385836 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385847 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385909 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385924 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385936 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385948 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385961 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385975 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.385988 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386002 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386013 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386025 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386037 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386050 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386063 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386076 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386089 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386101 4903 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386112 4903 reconstruct.go:97] "Volume reconstruction finished" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.386120 4903 reconciler.go:26] "Reconciler: start to sync state" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.395316 4903 manager.go:324] Recovery completed Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.404992 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.407410 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.407453 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.407464 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.408401 4903 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.408423 4903 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.408453 4903 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.408900 4903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.411025 4903 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.412079 4903 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.412128 4903 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.412177 4903 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.413864 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.413942 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.434201 4903 policy_none.go:49] "None policy: Start" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.435592 4903 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.435626 4903 state_mem.go:35] "Initializing new in-memory state store" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.464455 4903 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.486543 4903 manager.go:334] "Starting Device Plugin manager" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.487310 4903 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.487346 4903 server.go:79] "Starting device plugin registration server" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.487865 4903 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.487882 4903 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.488012 4903 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.488107 4903 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.488119 4903 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.494480 4903 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.512722 4903 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:45:28 crc kubenswrapper[4903]: 
I0128 15:45:28.512820 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.514228 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.514263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.514275 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.514409 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.515248 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.515316 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517069 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517118 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517129 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517255 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517389 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517426 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517438 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.517638 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.518564 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519146 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519250 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519308 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519399 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519407 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519679 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519839 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.519893 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.520942 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.521004 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.521022 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.520958 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.521086 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.521099 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.521233 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.521506 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.521584 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522026 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522141 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522230 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522520 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522650 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522792 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522828 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.522841 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.523963 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.524002 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.524014 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.566395 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="400ms" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.588366 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.588599 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.588780 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.588861 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.588897 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.588922 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.588972 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589035 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589086 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589156 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589209 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589237 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589266 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589290 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589318 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.589360 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.590358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.590389 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.590399 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.590424 4903 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.590799 4903 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690475 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690569 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690601 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690629 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690662 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690684 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690729 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690747 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690780 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690689 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690910 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690815 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690959 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690802 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690969 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.690995 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691013 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691028 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691052 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691061 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691077 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691095 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691134 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691149 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691278 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691313 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691226 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691402 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691404 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.691458 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.791225 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.793368 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.793414 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.793432 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.793465 4903 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.793790 4903 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" 
node="crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.843565 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.847349 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.862516 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.869969 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: I0128 15:45:28.875612 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.926458 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a2311ee254994a66ef9b9dd4e954d810b55272ff2209ff33df4820372f41ffed WatchSource:0}: Error finding container a2311ee254994a66ef9b9dd4e954d810b55272ff2209ff33df4820372f41ffed: Status 404 returned error can't find the container with id a2311ee254994a66ef9b9dd4e954d810b55272ff2209ff33df4820372f41ffed Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.927496 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fb784c71581295106aad6740d111e166e2879d2c6dd1c49946fe5f1584f9ce62 WatchSource:0}: Error finding container fb784c71581295106aad6740d111e166e2879d2c6dd1c49946fe5f1584f9ce62: Status 404 returned error can't find the container with id fb784c71581295106aad6740d111e166e2879d2c6dd1c49946fe5f1584f9ce62 Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.937067 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ee35d88a151825c584c7fed7814fd85191dfb542a79dc70fac4145c8f3ede63d WatchSource:0}: Error finding container ee35d88a151825c584c7fed7814fd85191dfb542a79dc70fac4145c8f3ede63d: Status 404 returned error can't find the container with id ee35d88a151825c584c7fed7814fd85191dfb542a79dc70fac4145c8f3ede63d Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.940175 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0347d889097a2342e5c12ed1361d94628cfca262c83d346960ae8fb879078d4c WatchSource:0}: Error finding container 0347d889097a2342e5c12ed1361d94628cfca262c83d346960ae8fb879078d4c: Status 404 returned error can't find the container with id 0347d889097a2342e5c12ed1361d94628cfca262c83d346960ae8fb879078d4c Jan 28 15:45:28 crc kubenswrapper[4903]: W0128 15:45:28.944311 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-cd592256d5aa330678977c80bfd72abdc936e8c288964ec68cbdc94570bf1ebf WatchSource:0}: Error finding container cd592256d5aa330678977c80bfd72abdc936e8c288964ec68cbdc94570bf1ebf: Status 404 returned 
error can't find the container with id cd592256d5aa330678977c80bfd72abdc936e8c288964ec68cbdc94570bf1ebf Jan 28 15:45:28 crc kubenswrapper[4903]: E0128 15:45:28.967123 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="800ms" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.194675 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.197093 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.197168 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.197187 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.197225 4903 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:45:29 crc kubenswrapper[4903]: E0128 15:45:29.197855 4903 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.361408 4903 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.364600 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:02:44.620035615 +0000 UTC Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.416371 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd592256d5aa330678977c80bfd72abdc936e8c288964ec68cbdc94570bf1ebf"} Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.417295 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0347d889097a2342e5c12ed1361d94628cfca262c83d346960ae8fb879078d4c"} Jan 28 15:45:29 crc kubenswrapper[4903]: W0128 15:45:29.417639 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:29 crc kubenswrapper[4903]: E0128 15:45:29.417701 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.418021 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ee35d88a151825c584c7fed7814fd85191dfb542a79dc70fac4145c8f3ede63d"} Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.418861 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb784c71581295106aad6740d111e166e2879d2c6dd1c49946fe5f1584f9ce62"} Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.419580 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a2311ee254994a66ef9b9dd4e954d810b55272ff2209ff33df4820372f41ffed"} Jan 28 15:45:29 crc kubenswrapper[4903]: W0128 15:45:29.435089 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:29 crc kubenswrapper[4903]: E0128 15:45:29.435144 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:29 crc kubenswrapper[4903]: W0128 15:45:29.687144 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:29 crc kubenswrapper[4903]: E0128 15:45:29.687236 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:29 crc kubenswrapper[4903]: E0128 15:45:29.768551 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="1.6s" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.997973 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.999346 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.999373 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.999382 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:29 crc kubenswrapper[4903]: I0128 15:45:29.999404 4903 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:45:29 crc kubenswrapper[4903]: E0128 
15:45:29.999865 4903 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Jan 28 15:45:30 crc kubenswrapper[4903]: W0128 15:45:30.000378 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:30 crc kubenswrapper[4903]: E0128 15:45:30.000496 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.349952 4903 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:45:30 crc kubenswrapper[4903]: E0128 15:45:30.350962 4903 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.361732 4903 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.364887 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:47:23.870481124 +0000 UTC Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.422955 4903 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b" exitCode=0 Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.423028 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.423038 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b"} Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.424066 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.424101 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.424112 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.425559 4903 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159" exitCode=0 Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.425626 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159"} Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.425685 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.426595 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.426622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.426633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.429036 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009"} Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.430885 4903 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b" exitCode=0 Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.430956 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b"} Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.430994 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.431875 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.431900 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.431912 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.432246 4903 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e" exitCode=0 Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.432278 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e"} Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.432353 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.433372 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.433407 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.433415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.434872 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.438951 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.438980 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:30 crc kubenswrapper[4903]: I0128 15:45:30.438989 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:31 crc kubenswrapper[4903]: W0128 15:45:31.037548 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:31 crc kubenswrapper[4903]: E0128 15:45:31.037615 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:31 crc kubenswrapper[4903]: W0128 15:45:31.291957 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:31 crc kubenswrapper[4903]: E0128 15:45:31.292081 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.361433 4903 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.365940 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:51:55.187932341 +0000 UTC Jan 28 15:45:31 crc kubenswrapper[4903]: E0128 15:45:31.369247 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="3.2s" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.437928 4903 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58" exitCode=0 Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.437993 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.438098 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.439927 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.439955 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.439968 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.440810 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.440935 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.441753 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.441774 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.441785 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.443803 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.443826 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.443837 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.443840 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.444398 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.444423 4903 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.444448 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.450736 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.450782 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.450792 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.450867 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.452134 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.452162 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.452171 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.453762 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.453805 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.453817 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a"} Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.600816 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.602214 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.602248 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:31 crc kubenswrapper[4903]: I0128 15:45:31.602256 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:31 crc 
kubenswrapper[4903]: I0128 15:45:31.602288 4903 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:45:31 crc kubenswrapper[4903]: E0128 15:45:31.602730 4903 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.361949 4903 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.366065 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:22:48.408393492 +0000 UTC Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.459797 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf"} Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.459861 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e"} Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.459927 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.460815 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.460855 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.460866 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.461477 4903 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55" exitCode=0 Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.461597 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55"} Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.461615 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.461653 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.461698 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.461616 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.461666 4903 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462331 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462368 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462758 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462780 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462791 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462811 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462829 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462840 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462865 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462880 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:32 crc kubenswrapper[4903]: I0128 15:45:32.462881 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:32 crc kubenswrapper[4903]: W0128 15:45:32.555370 4903 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Jan 28 15:45:32 crc kubenswrapper[4903]: E0128 15:45:32.555587 4903 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.172934 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.182231 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.366302 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:46:01.492652679 +0000 UTC Jan 
28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470030 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3"} Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470083 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9"} Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470096 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a"} Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470108 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df"} Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470117 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470155 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470173 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470117 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.470204 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471187 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471216 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471239 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471254 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471281 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471300 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471313 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471218 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:33 crc kubenswrapper[4903]: I0128 15:45:33.471485 4903 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.125039 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.366779 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:58:00.499310088 +0000 UTC Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.479876 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa"} Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.479921 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.480002 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.480257 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.481865 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.481944 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.481971 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.482148 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.482174 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.482179 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.482217 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.482187 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.482238 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.750909 4903 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.803671 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.805332 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.805379 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.805397 
4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:34 crc kubenswrapper[4903]: I0128 15:45:34.805431 4903 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.367354 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:29:09.237572013 +0000 UTC Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.482784 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.482784 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.483882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.483931 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.483942 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.484325 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.484376 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:35 crc kubenswrapper[4903]: I0128 15:45:35.484391 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:36 crc kubenswrapper[4903]: I0128 15:45:36.180444 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:36 crc kubenswrapper[4903]: I0128 15:45:36.367804 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:02:20.14568009 +0000 UTC Jan 28 15:45:36 crc kubenswrapper[4903]: I0128 15:45:36.485431 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:36 crc kubenswrapper[4903]: I0128 15:45:36.486211 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:36 crc kubenswrapper[4903]: I0128 15:45:36.486244 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:36 crc kubenswrapper[4903]: I0128 15:45:36.486254 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:37 crc kubenswrapper[4903]: I0128 15:45:37.368679 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:59:25.990679624 +0000 UTC Jan 28 15:45:37 crc kubenswrapper[4903]: I0128 15:45:37.718579 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 15:45:37 crc kubenswrapper[4903]: I0128 15:45:37.718744 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:37 
crc kubenswrapper[4903]: I0128 15:45:37.719860 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:37 crc kubenswrapper[4903]: I0128 15:45:37.719896 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:37 crc kubenswrapper[4903]: I0128 15:45:37.719905 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:38 crc kubenswrapper[4903]: I0128 15:45:38.369598 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:06:54.28818657 +0000 UTC Jan 28 15:45:38 crc kubenswrapper[4903]: E0128 15:45:38.494716 4903 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:45:39 crc kubenswrapper[4903]: I0128 15:45:39.370318 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:36:22.997732767 +0000 UTC Jan 28 15:45:39 crc kubenswrapper[4903]: I0128 15:45:39.716861 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:39 crc kubenswrapper[4903]: I0128 15:45:39.717130 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:39 crc kubenswrapper[4903]: I0128 15:45:39.718606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:39 crc kubenswrapper[4903]: I0128 15:45:39.718695 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:39 crc kubenswrapper[4903]: I0128 15:45:39.718725 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:40 crc kubenswrapper[4903]: I0128 15:45:40.371342 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:53:55.921561421 +0000 UTC Jan 28 15:45:40 crc kubenswrapper[4903]: I0128 15:45:40.638602 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:40 crc kubenswrapper[4903]: I0128 15:45:40.638871 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:40 crc kubenswrapper[4903]: I0128 15:45:40.640605 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:40 crc kubenswrapper[4903]: I0128 15:45:40.640705 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:40 crc kubenswrapper[4903]: I0128 15:45:40.640734 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:40 crc kubenswrapper[4903]: I0128 15:45:40.646419 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.372066 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration 
is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:11:43.529323858 +0000 UTC Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.500273 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.502273 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.502341 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.502409 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.973429 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.973610 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.974803 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.974826 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:41 crc kubenswrapper[4903]: I0128 15:45:41.974834 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:42 crc kubenswrapper[4903]: I0128 15:45:42.372704 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:37:05.651375803 +0000 UTC Jan 28 15:45:42 crc kubenswrapper[4903]: I0128 15:45:42.766907 4903 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:45:42 crc kubenswrapper[4903]: I0128 15:45:42.766994 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:45:42 crc kubenswrapper[4903]: I0128 15:45:42.770464 4903 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:45:42 crc kubenswrapper[4903]: I0128 15:45:42.770554 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:45:43 crc kubenswrapper[4903]: I0128 15:45:43.374289 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:57:41.798858938 +0000 UTC Jan 28 15:45:43 crc kubenswrapper[4903]: I0128 15:45:43.639224 4903 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:45:43 crc kubenswrapper[4903]: I0128 15:45:43.639301 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 15:45:44 crc kubenswrapper[4903]: I0128 15:45:44.375160 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:09:05.404610158 +0000 UTC Jan 28 15:45:45 crc kubenswrapper[4903]: I0128 15:45:45.375342 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:29:13.665313646 +0000 UTC Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.187152 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.187320 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.188359 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.188392 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.188402 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.192582 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.375771 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:59:39.3267614 +0000 UTC Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.512086 4903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.512139 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.512947 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.512987 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:46 crc kubenswrapper[4903]: I0128 15:45:46.513003 4903 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.376316 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:47:51.849422796 +0000 UTC Jan 28 15:45:47 crc kubenswrapper[4903]: E0128 15:45:47.750579 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.753064 4903 trace.go:236] Trace[777151393]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:45:37.158) (total time: 10594ms): Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[777151393]: ---"Objects listed" error: 10594ms (15:45:47.752) Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[777151393]: [10.59463087s] [10.59463087s] END Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.753097 4903 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.753077 4903 trace.go:236] Trace[195784004]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:45:32.754) (total time: 14998ms): Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[195784004]: ---"Objects listed" error: 14998ms (15:45:47.752) Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[195784004]: [14.998371357s] [14.998371357s] END Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.753142 4903 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:47 crc kubenswrapper[4903]: E0128 15:45:47.755330 4903 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.755797 4903 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.755821 4903 trace.go:236] Trace[1057586984]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:45:36.330) (total time: 11425ms): Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[1057586984]: ---"Objects listed" error: 11425ms (15:45:47.755) Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[1057586984]: [11.425430441s] [11.425430441s] END Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.755834 4903 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.760731 4903 trace.go:236] Trace[938442519]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:45:34.866) (total time: 12893ms): Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[938442519]: ---"Objects listed" error: 12893ms (15:45:47.760) Jan 28 15:45:47 crc kubenswrapper[4903]: Trace[938442519]: [12.893850531s] [12.893850531s] END Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.760768 4903 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.769692 4903 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 
15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.791669 4903 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35284->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.791769 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35284->192.168.126.11:17697: read: connection reset by peer" Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.792899 4903 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.792963 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.793156 4903 csr.go:261] certificate signing request csr-5zscc is approved, waiting to be issued Jan 28 15:45:47 crc kubenswrapper[4903]: I0128 15:45:47.810385 4903 csr.go:257] certificate signing request csr-5zscc is issued Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.216540 4903 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 15:45:48 crc kubenswrapper[4903]: W0128 15:45:48.216860 4903 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:45:48 crc kubenswrapper[4903]: W0128 15:45:48.216923 4903 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:45:48 crc kubenswrapper[4903]: W0128 15:45:48.216929 4903 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.216936 4903 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.251:57170->38.102.83.251:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188eef957da807fb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:45:28.940513275 +0000 UTC m=+1.216484786,LastTimestamp:2026-01-28 15:45:28.940513275 +0000 UTC m=+1.216484786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:45:48 crc kubenswrapper[4903]: W0128 15:45:48.217150 4903 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.360869 4903 apiserver.go:52] "Watching apiserver" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.367005 4903 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.367333 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.367731 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.367895 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.367990 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.368027 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.368132 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.368176 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.368302 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.369045 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.369100 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.369460 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.373394 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.373703 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.373941 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.374076 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.374150 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.374246 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.374349 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.374482 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.376601 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:52:56.468507779 +0000 UTC Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.394828 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.406732 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.434203 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.456793 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.464831 4903 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.489187 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.506878 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.517800 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.519280 4903 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf" exitCode=255 Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.519321 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf"} Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.521964 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.532276 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.534015 4903 scope.go:117] "RemoveContainer" containerID="0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.535256 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.542629 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.556002 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.560941 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.560992 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561041 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561063 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561087 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561109 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561127 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:45:48 crc 
kubenswrapper[4903]: I0128 15:45:48.561147 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561170 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561191 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561211 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561228 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561247 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561266 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561305 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561310 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561328 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561348 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561371 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561399 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561425 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561463 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561487 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561510 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561545 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561553 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561597 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561625 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561651 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561675 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561699 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561719 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561741 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561762 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561783 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561809 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561831 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561852 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561879 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561904 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561928 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561951 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.561975 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562000 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562021 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562126 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562153 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562177 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562199 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562223 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562229 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562246 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562270 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562293 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562283 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562309 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562309 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562474 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562550 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562765 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562838 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.562317 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563063 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563092 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563158 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563195 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563219 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563245 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:45:48 crc kubenswrapper[4903]: 
I0128 15:45:48.563275 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563305 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563327 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563348 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563371 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563392 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563410 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563428 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563454 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563475 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") 
" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563495 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563517 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563553 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563576 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563595 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563617 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563640 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563662 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563683 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563706 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563738 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563764 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563785 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563805 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563830 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563853 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563876 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563900 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563922 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563944 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563967 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.563994 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564019 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564047 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564071 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564095 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564120 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564145 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564171 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564233 4903 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564256 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564280 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564303 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564326 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564348 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564371 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564395 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564513 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564576 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564597 4903 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564616 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564638 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564659 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564678 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564699 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564722 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564743 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564763 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564784 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 
15:45:48.564803 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564824 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564848 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564870 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564895 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564921 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564947 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.564990 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565015 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565042 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 
15:45:48.565069 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565094 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565119 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565143 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565168 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565193 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565216 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565240 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565267 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565289 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:45:48 crc 
kubenswrapper[4903]: I0128 15:45:48.565311 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565333 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565351 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565369 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565389 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565408 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565428 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565450 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565470 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565501 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565545 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565567 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565586 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565608 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565626 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565644 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565664 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565683 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565701 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565718 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565738 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565755 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565772 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565789 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565808 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565829 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565854 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565885 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565909 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565929 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565958 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565978 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.565997 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566017 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566037 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566166 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566189 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566207 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566226 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566245 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566263 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566286 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566306 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566326 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566319 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566347 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566390 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566381 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566465 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566480 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.566504 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:45:49.066475408 +0000 UTC m=+21.342446919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566544 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566682 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566706 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566705 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566718 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.566972 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567110 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567371 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567413 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567444 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567476 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567505 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567549 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567581 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567617 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.567650 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.568844 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569096 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569107 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569135 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569144 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569386 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569476 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569552 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569618 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569644 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569867 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569922 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570124 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570194 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570235 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.569655 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570213 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570302 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570523 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.570637 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570701 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570823 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.570947 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.571719 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.571925 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.571998 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:49.071975008 +0000 UTC m=+21.347946519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.572048 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.572560 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.572836 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.573240 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.576323 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.576721 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.576763 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.576970 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577079 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577102 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577114 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577222 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577260 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577292 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577317 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.577726 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.578203 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.578317 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.578833 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.578839 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.578956 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.578972 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579145 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579184 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579282 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579610 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579729 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579796 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579804 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579827 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579958 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580033 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580240 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.579959 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580655 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580392 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580856 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580858 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580954 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581128 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581169 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581393 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.580808 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581690 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581612 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581834 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581913 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.581925 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582015 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582253 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582300 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582351 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582468 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582500 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582549 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582744 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.582848 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582876 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.582902 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:49.082885947 +0000 UTC m=+21.358857458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583111 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583164 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583329 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.582316 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583517 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583548 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583610 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583665 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583754 4903 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583904 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.583933 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.584123 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.585100 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.585495 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.585643 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.586494 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.586768 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.586788 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587067 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587080 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587200 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587242 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587506 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587584 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587632 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.587657 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.588694 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589064 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589694 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589717 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589730 4903 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589739 4903 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589750 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589760 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589770 4903 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589780 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589792 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589804 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589817 4903 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589832 4903 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589846 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") 
on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589858 4903 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589870 4903 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589882 4903 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589893 4903 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589902 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589913 4903 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589922 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589932 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589944 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589955 4903 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589964 4903 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589974 4903 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589983 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.589994 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590003 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590013 4903 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590023 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590032 4903 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590041 4903 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590050 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590061 4903 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590078 4903 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590095 4903 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590108 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590119 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590128 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590137 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590147 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590156 4903 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590165 4903 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590177 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590186 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.590197 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.596105 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.598386 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.600427 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.600463 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.600642 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.600713 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.601216 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.601240 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.601340 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.601352 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.601665 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:49.101647905 +0000 UTC m=+21.377619416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.601914 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.600802 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.600996 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.602859 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.603208 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.604417 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.604907 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.605663 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.606448 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.606761 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.608095 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.608675 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.608733 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.608166 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.608446 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.608963 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.609079 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.609145 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.609133 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.609294 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.609379 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.609568 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.609601 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.609626 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.609652 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: E0128 15:45:48.609738 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-28 15:45:49.109714901 +0000 UTC m=+21.385686412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.609897 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.610067 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.610390 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.610439 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.610835 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611072 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611398 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611443 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611580 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611753 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611620 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611663 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611700 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.611709 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.612034 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.612143 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.612158 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.612197 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.612648 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613160 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613194 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613284 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613290 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613485 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613503 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613593 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.613727 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.614063 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.614368 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.614514 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.614599 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.614794 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.615115 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.614636 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.616125 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.616383 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.616494 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.616782 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.616784 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.616956 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.619251 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.620683 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.622446 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.622656 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.623092 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.623308 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.623362 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.624084 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.624596 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.624873 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.625130 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.625251 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.625428 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626042 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626094 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626102 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626121 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626025 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626297 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626343 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.626751 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.631623 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.633571 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.635709 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.635874 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.643431 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.645703 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.654357 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.663793 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.673632 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.684838 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691197 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691254 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691302 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691315 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691327 4903 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691340 4903 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691353 4903 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691359 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691364 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691391 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691398 4903 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691421 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691432 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691443 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691453 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691464 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691474 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691485 4903 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691498 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691509 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691520 4903 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691549 4903 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691561 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691573 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691585 4903 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691598 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691610 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691621 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691632 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691643 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691654 4903 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691665 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691676 4903 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691689 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691700 4903 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691710 4903 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691722 4903 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691733 4903 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691745 4903 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691755 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691767 4903 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691778 4903 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691789 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691802 4903 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691814 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691827 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691839 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691850 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691862 4903 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691879 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691890 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691902 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691926 4903 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691939 4903 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691951 4903 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691961 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691973 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691986 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.691997 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692011 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692023 4903 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692035 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692047 4903 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692059 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692071 4903 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692083 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692097 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692109 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692199 4903 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692214 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692227 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692240 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692289 
4903 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692301 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692313 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692338 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692352 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692362 4903 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692372 4903 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692384 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692395 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692406 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692417 4903 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692428 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692439 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 
15:45:48.692450 4903 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692461 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692472 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692484 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692494 4903 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692505 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692518 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692544 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692557 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692568 4903 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692579 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692591 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692607 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692619 4903 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692631 4903 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692643 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692654 4903 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692666 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692678 4903 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692697 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692708 4903 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692719 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692731 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692742 4903 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692753 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692765 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692776 4903 reconciler_common.go:293] 
"Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692787 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692799 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692811 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692822 4903 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692833 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692845 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692856 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692867 4903 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692877 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692889 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692899 4903 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692909 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692919 4903 reconciler_common.go:293] "Volume detached for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.692929 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693210 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693222 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693233 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693268 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693280 4903 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693292 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693303 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693314 4903 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693326 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693338 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693349 4903 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693360 4903 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693372 4903 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693383 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693394 4903 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693406 4903 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693417 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693429 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693440 4903 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693450 4903 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693460 4903 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693470 4903 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693482 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693493 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693504 4903 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693515 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.693579 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.700985 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.811783 4903 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 15:40:47 +0000 UTC, rotation deadline is 2026-10-31 03:39:48.020466145 +0000 UTC Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.811885 4903 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6611h53m59.208584619s for next certificate rotation Jan 28 15:45:48 crc kubenswrapper[4903]: I0128 15:45:48.983401 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:45:48 crc kubenswrapper[4903]: W0128 15:45:48.994575 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-70446c01d885d49290d8b70a200c05e86810ea94e483072bb0794fdb60b03e0f WatchSource:0}: Error finding container 70446c01d885d49290d8b70a200c05e86810ea94e483072bb0794fdb60b03e0f: Status 404 returned error can't find the container with id 70446c01d885d49290d8b70a200c05e86810ea94e483072bb0794fdb60b03e0f Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.097308 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.097393 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.097436 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:45:50.097419315 +0000 UTC m=+22.373390826 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.097469 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.097477 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.097515 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:50.097506227 +0000 UTC m=+22.373477738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.097554 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.097585 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:50.09757887 +0000 UTC m=+22.373550381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.198370 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.198416 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198581 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198600 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198614 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198668 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:50.198649207 +0000 UTC m=+22.474620728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198729 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198745 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198755 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.198781 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:50.19877237 +0000 UTC m=+22.474743891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.376683 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:25:13.042793356 +0000 UTC Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.412329 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:49 crc kubenswrapper[4903]: E0128 15:45:49.412683 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.523491 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.525373 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2"} Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.525779 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.526337 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"70446c01d885d49290d8b70a200c05e86810ea94e483072bb0794fdb60b03e0f"} Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.527844 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233"} Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.527870 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521"} Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.527880 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4817ff85ecaf7455f5938159e49b0119a1ba3716f4b2cb7df025dc2530043f88"} Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.529703 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da"} Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.529728 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"53bb0e881084136107f116b009640f4dd15e42fd356b2d0683f1e72a975740c6"} Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.555044 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.569270 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vxz6b"] Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.569563 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.572602 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.572832 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.573016 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.577050 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.602210 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/466b540b-3447-4d30-a2e5-8c7755027e99-hosts-file\") pod \"node-resolver-vxz6b\" (UID: \"466b540b-3447-4d30-a2e5-8c7755027e99\") " pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.602292 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jlv\" (UniqueName: \"kubernetes.io/projected/466b540b-3447-4d30-a2e5-8c7755027e99-kube-api-access-57jlv\") pod \"node-resolver-vxz6b\" (UID: \"466b540b-3447-4d30-a2e5-8c7755027e99\") " pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.607866 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.655783 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.667026 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.680195 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.699337 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.703558 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57jlv\" (UniqueName: \"kubernetes.io/projected/466b540b-3447-4d30-a2e5-8c7755027e99-kube-api-access-57jlv\") pod \"node-resolver-vxz6b\" (UID: \"466b540b-3447-4d30-a2e5-8c7755027e99\") " pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.703629 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/466b540b-3447-4d30-a2e5-8c7755027e99-hosts-file\") pod \"node-resolver-vxz6b\" (UID: \"466b540b-3447-4d30-a2e5-8c7755027e99\") " pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.703726 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/466b540b-3447-4d30-a2e5-8c7755027e99-hosts-file\") pod \"node-resolver-vxz6b\" (UID: \"466b540b-3447-4d30-a2e5-8c7755027e99\") " pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.715092 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.722307 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57jlv\" (UniqueName: \"kubernetes.io/projected/466b540b-3447-4d30-a2e5-8c7755027e99-kube-api-access-57jlv\") pod \"node-resolver-vxz6b\" (UID: \"466b540b-3447-4d30-a2e5-8c7755027e99\") " pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.724665 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.738239 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.750547 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.762686 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.774417 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.784867 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.803753 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.882270 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vxz6b" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.980616 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-plxzk"] Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.981038 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.982431 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5c5kq"] Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.982826 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.983443 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7g6pn"] Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.983463 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.983687 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.983697 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.983836 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.984074 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7g6pn" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.987637 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.992978 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.993076 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.993151 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.993347 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.993458 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.993654 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:45:49 crc kubenswrapper[4903]: I0128 15:45:49.993727 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.021514 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.046546 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.061125 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.073512 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.092074 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106446 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106554 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-cnibin\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106574 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-cni-multus\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106590 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-netns\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106605 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-cni-bin\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106620 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cnibin\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106636 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/dacf7a8c-d645-4596-9266-092101fc3613-mcd-auth-proxy-config\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.106662 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:45:52.106634866 +0000 UTC m=+24.382606377 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106708 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-multus-certs\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106792 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dacf7a8c-d645-4596-9266-092101fc3613-proxy-tls\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106854 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-conf-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106883 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-daemon-config\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106908 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-etc-kubernetes\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106927 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/368501de-b207-4b6b-a0fb-eba74fe5ec74-cni-binary-copy\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106950 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcrs2\" (UniqueName: \"kubernetes.io/projected/368501de-b207-4b6b-a0fb-eba74fe5ec74-kube-api-access-jcrs2\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.106969 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cni-binary-copy\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107025 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-cni-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107055 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4ps\" (UniqueName: \"kubernetes.io/projected/0566b7c5-190a-4000-9e3c-ff9d91235ccd-kube-api-access-6f4ps\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107087 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107148 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88bsb\" (UniqueName: \"kubernetes.io/projected/dacf7a8c-d645-4596-9266-092101fc3613-kube-api-access-88bsb\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107177 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-hostroot\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107195 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-system-cni-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107217 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-kubelet\") pod \"multus-7g6pn\" (UID: 
\"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.107300 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107339 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-system-cni-dir\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107368 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.107420 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:52.107403096 +0000 UTC m=+24.383374617 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107519 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.107658 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107700 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-os-release\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107720 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.107749 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:52.107736344 +0000 UTC m=+24.383707915 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107802 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-socket-dir-parent\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107831 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-k8s-cni-cncf-io\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107868 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107900 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dacf7a8c-d645-4596-9266-092101fc3613-rootfs\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.107921 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-os-release\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.131836 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.145007 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.171730 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.185707 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.198052 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208493 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dacf7a8c-d645-4596-9266-092101fc3613-proxy-tls\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208569 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-multus-certs\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208593 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-conf-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208614 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-daemon-config\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208637 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-etc-kubernetes\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208660 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208683 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-cni-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208710 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/368501de-b207-4b6b-a0fb-eba74fe5ec74-cni-binary-copy\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208733 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcrs2\" (UniqueName: \"kubernetes.io/projected/368501de-b207-4b6b-a0fb-eba74fe5ec74-kube-api-access-jcrs2\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208758 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cni-binary-copy\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208737 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-multus-certs\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208779 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f4ps\" (UniqueName: 
\"kubernetes.io/projected/0566b7c5-190a-4000-9e3c-ff9d91235ccd-kube-api-access-6f4ps\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208882 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88bsb\" (UniqueName: \"kubernetes.io/projected/dacf7a8c-d645-4596-9266-092101fc3613-kube-api-access-88bsb\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208908 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-hostroot\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208934 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-kubelet\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208954 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-system-cni-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.208969 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-system-cni-dir\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209006 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-os-release\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209027 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209082 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dacf7a8c-d645-4596-9266-092101fc3613-rootfs\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209102 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-os-release\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209112 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-conf-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209120 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-socket-dir-parent\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209150 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-k8s-cni-cncf-io\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209167 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209186 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209203 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-cnibin\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209218 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-cni-multus\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209238 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dacf7a8c-d645-4596-9266-092101fc3613-mcd-auth-proxy-config\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209255 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-netns\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209270 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-cni-bin\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209283 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-socket-dir-parent\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209287 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cnibin\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209310 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cnibin\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209850 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-k8s-cni-cncf-io\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209871 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-daemon-config\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.209901 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-etc-kubernetes\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.209996 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210004 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-run-netns\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210056 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-system-cni-dir\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.210021 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.210095 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.210138 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:52.210123835 +0000 UTC m=+24.486095346 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210134 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-cni-bin\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.210156 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210206 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-multus-cni-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.210221 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.210240 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210253 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-kubelet\") pod \"multus-7g6pn\" (UID: 
\"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.210293 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:52.210274388 +0000 UTC m=+24.486245909 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210344 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-system-cni-dir\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210348 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-cnibin\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210375 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-hostroot\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210414 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/dacf7a8c-d645-4596-9266-092101fc3613-rootfs\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210484 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-os-release\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210557 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-os-release\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210657 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/368501de-b207-4b6b-a0fb-eba74fe5ec74-cni-binary-copy\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210678 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cni-binary-copy\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210809 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0566b7c5-190a-4000-9e3c-ff9d91235ccd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210916 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/368501de-b207-4b6b-a0fb-eba74fe5ec74-host-var-lib-cni-multus\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.210954 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dacf7a8c-d645-4596-9266-092101fc3613-mcd-auth-proxy-config\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.211002 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0566b7c5-190a-4000-9e3c-ff9d91235ccd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.213806 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.213970 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dacf7a8c-d645-4596-9266-092101fc3613-proxy-tls\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.224784 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f4ps\" (UniqueName: \"kubernetes.io/projected/0566b7c5-190a-4000-9e3c-ff9d91235ccd-kube-api-access-6f4ps\") pod \"multus-additional-cni-plugins-5c5kq\" (UID: \"0566b7c5-190a-4000-9e3c-ff9d91235ccd\") " pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.227385 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.227465 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88bsb\" (UniqueName: \"kubernetes.io/projected/dacf7a8c-d645-4596-9266-092101fc3613-kube-api-access-88bsb\") pod \"machine-config-daemon-plxzk\" (UID: \"dacf7a8c-d645-4596-9266-092101fc3613\") " pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.227566 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcrs2\" (UniqueName: \"kubernetes.io/projected/368501de-b207-4b6b-a0fb-eba74fe5ec74-kube-api-access-jcrs2\") pod \"multus-7g6pn\" (UID: \"368501de-b207-4b6b-a0fb-eba74fe5ec74\") " pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.238352 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.249848 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.262163 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.282176 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.296549 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.296858 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:45:50 crc kubenswrapper[4903]: W0128 15:45:50.307337 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddacf7a8c_d645_4596_9266_092101fc3613.slice/crio-126d12e723bee1c6af08b2a4ca0924b5633a66d3be492891a9287a11bc375d16 WatchSource:0}: Error finding container 126d12e723bee1c6af08b2a4ca0924b5633a66d3be492891a9287a11bc375d16: Status 404 returned error can't find the container with id 126d12e723bee1c6af08b2a4ca0924b5633a66d3be492891a9287a11bc375d16 Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.310377 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.314405 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.322320 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-7g6pn" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.329440 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.1
1\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: W0128 15:45:50.335152 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0566b7c5_190a_4000_9e3c_ff9d91235ccd.slice/crio-1dfd74826305fb74c6c9b430782941e825f86c94b7f274d8aa7ee36d637f23a0 WatchSource:0}: Error finding container 1dfd74826305fb74c6c9b430782941e825f86c94b7f274d8aa7ee36d637f23a0: Status 404 returned error can't find the container with id 1dfd74826305fb74c6c9b430782941e825f86c94b7f274d8aa7ee36d637f23a0 Jan 28 15:45:50 crc kubenswrapper[4903]: W0128 15:45:50.344409 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368501de_b207_4b6b_a0fb_eba74fe5ec74.slice/crio-4ff2d436d6a38b8d794a268d70c2dd1d457d0bc5b26bd2c60d2315991dec5a2d WatchSource:0}: Error finding container 4ff2d436d6a38b8d794a268d70c2dd1d457d0bc5b26bd2c60d2315991dec5a2d: Status 404 returned error can't find the container with id 4ff2d436d6a38b8d794a268d70c2dd1d457d0bc5b26bd2c60d2315991dec5a2d Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.376943 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:35:26.78995418 +0000 UTC Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.401038 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dwbc4"] Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.402469 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.405381 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.405620 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.405751 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.405979 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.407397 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.407442 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.409329 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.413270 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.413595 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.413412 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:50 crc kubenswrapper[4903]: E0128 15:45:50.413943 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.419172 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.420389 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.421498 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.422522 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.425563 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.426575 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.428005 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.429719 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.430431 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.431722 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.432777 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.433551 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.435161 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.435919 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.437318 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.438038 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.439353 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.440161 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.440942 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.441245 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.442041 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.442888 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.444029 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.444781 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.445397 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.446797 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.447364 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.448668 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.449547 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.451008 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.451926 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.453059 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.453976 4903 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.454172 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.455311 4903 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.457069 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.457813 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.458377 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.460429 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.461777 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.462512 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.465888 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.467115 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.467337 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: 
I0128 15:45:50.468117 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.470457 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.471949 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.472739 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.473753 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.476277 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.477040 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.480088 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.480778 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.481772 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.482253 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.482876 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.483883 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.484348 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.484722 4903 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.500625 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.510929 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-systemd-units\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.510997 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-netd\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 
15:45:50.511049 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwk55\" (UniqueName: \"kubernetes.io/projected/29cc3edd-9664-4899-b496-47543927e256-kube-api-access-nwk55\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511074 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-systemd\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511092 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-ovn-kubernetes\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511138 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511160 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-env-overrides\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511204 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511226 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-script-lib\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511286 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-log-socket\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511373 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29cc3edd-9664-4899-b496-47543927e256-ovn-node-metrics-cert\") pod 
\"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511440 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-kubelet\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511465 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-var-lib-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511486 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-etc-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511553 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-slash\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511613 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-node-log\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511643 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-bin\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511686 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-netns\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511710 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-ovn\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.511729 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-config\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.514570 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.530788 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.534876 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerStarted","Data":"ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f"} Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.534922 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerStarted","Data":"4ff2d436d6a38b8d794a268d70c2dd1d457d0bc5b26bd2c60d2315991dec5a2d"} Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.536272 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vxz6b" event={"ID":"466b540b-3447-4d30-a2e5-8c7755027e99","Type":"ContainerStarted","Data":"c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885"} Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.536301 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vxz6b" event={"ID":"466b540b-3447-4d30-a2e5-8c7755027e99","Type":"ContainerStarted","Data":"70a607b70a383047cf30bffe91744ec88a64b39636f04a90ff98d16211f3344d"} Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.537302 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerStarted","Data":"1dfd74826305fb74c6c9b430782941e825f86c94b7f274d8aa7ee36d637f23a0"} Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.538650 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621"} Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.538689 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"126d12e723bee1c6af08b2a4ca0924b5633a66d3be492891a9287a11bc375d16"} Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.548666 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.563343 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.574181 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.588882 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.601877 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612374 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-kubelet\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612417 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-var-lib-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc 
kubenswrapper[4903]: I0128 15:45:50.612439 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-etc-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612516 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-slash\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612557 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-bin\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612552 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-kubelet\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612599 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-var-lib-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612579 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-node-log\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612690 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-netns\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612714 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-ovn\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612736 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-config\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612778 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-systemd-units\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612808 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-netd\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612835 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-systemd\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612860 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-ovn-kubernetes\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612881 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612908 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-env-overrides\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612933 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwk55\" (UniqueName: \"kubernetes.io/projected/29cc3edd-9664-4899-b496-47543927e256-kube-api-access-nwk55\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612958 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612997 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612996 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-script-lib\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613026 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-netns\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613050 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-ovn\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613094 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-log-socket\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613129 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29cc3edd-9664-4899-b496-47543927e256-ovn-node-metrics-cert\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613202 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-slash\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613241 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-bin\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613302 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-ovn-kubernetes\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613334 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-systemd-units\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613363 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-netd\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613390 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-systemd\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.612621 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-node-log\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613431 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-etc-openvswitch\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613732 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-config\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613873 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-log-socket\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.613886 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.614240 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-script-lib\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.614361 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-env-overrides\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.624452 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29cc3edd-9664-4899-b496-47543927e256-ovn-node-metrics-cert\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.629059 4903 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk5
5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.629897 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwk55\" (UniqueName: \"kubernetes.io/projected/29cc3edd-9664-4899-b496-47543927e256-kube-api-access-nwk55\") pod \"ovnkube-node-dwbc4\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.643425 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.646812 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.647955 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.653370 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.662329 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.675930 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.691584 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.702385 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.712284 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.717462 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.726517 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.746643 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.755855 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.768303 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.795265 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.808150 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.828021 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.850246 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.863857 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.882704 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.900350 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.916451 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.931243 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.949028 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.964695 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.981687 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:50 crc kubenswrapper[4903]: I0128 15:45:50.996018 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.378154 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:42:08.132644848 +0000 UTC Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.412825 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:51 crc kubenswrapper[4903]: E0128 15:45:51.412967 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.524041 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xzz6z"] Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.524372 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.527548 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.527679 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.528325 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.528474 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.541434 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.547138 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f"} Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.548802 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12" exitCode=0 Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.548866 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.548886 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"142b2a4f165086b669ab2b0f49ae91eed4de506993f79d8997e56c996b4f67b9"} Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.550467 4903 generic.go:334] "Generic (PLEG): container finished" podID="0566b7c5-190a-4000-9e3c-ff9d91235ccd" containerID="90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f" exitCode=0 Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.550637 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerDied","Data":"90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f"} Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.552470 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87"} Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.561441 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.579353 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.596183 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.614856 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.623756 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e8165e7-4fdc-495d-9408-87fca9df790e-serviceca\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.623901 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e8165e7-4fdc-495d-9408-87fca9df790e-host\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.624026 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfzj2\" (UniqueName: \"kubernetes.io/projected/6e8165e7-4fdc-495d-9408-87fca9df790e-kube-api-access-lfzj2\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.645455 4903 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk5
5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.661321 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.677754 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312
ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.691823 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.708200 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.725171 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e8165e7-4fdc-495d-9408-87fca9df790e-serviceca\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.725239 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e8165e7-4fdc-495d-9408-87fca9df790e-host\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.725294 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-lfzj2\" (UniqueName: \"kubernetes.io/projected/6e8165e7-4fdc-495d-9408-87fca9df790e-kube-api-access-lfzj2\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.725449 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e8165e7-4fdc-495d-9408-87fca9df790e-host\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.726236 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e8165e7-4fdc-495d-9408-87fca9df790e-serviceca\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.728975 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.743875 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.744363 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfzj2\" (UniqueName: \"kubernetes.io/projected/6e8165e7-4fdc-495d-9408-87fca9df790e-kube-api-access-lfzj2\") pod \"node-ca-xzz6z\" (UID: \"6e8165e7-4fdc-495d-9408-87fca9df790e\") " pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.758689 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.772427 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.784691 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.800729 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.813866 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.825246 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.837287 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xzz6z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.837340 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: W0128 15:45:51.850537 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8165e7_4fdc_495d_9408_87fca9df790e.slice/crio-de8aa2386065c69ad8c6cb3b9d6172cc99737a23440a54c1754072995438607f WatchSource:0}: Error finding container de8aa2386065c69ad8c6cb3b9d6172cc99737a23440a54c1754072995438607f: Status 404 returned error can't find the container with id de8aa2386065c69ad8c6cb3b9d6172cc99737a23440a54c1754072995438607f Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.875544 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.915602 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:51 crc kubenswrapper[4903]: I0128 15:45:51.961172 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.000266 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:51.997680 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.015893 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.018426 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.056939 4903 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 
15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.096861 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.128996 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.129200 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:45:56.129170233 +0000 UTC m=+28.405141744 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.129297 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.129337 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.129442 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.129483 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:56.129475341 +0000 UTC m=+28.405446852 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.129509 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.129656 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:56.129602274 +0000 UTC m=+28.405573815 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.135340 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.183378 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.227446 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.229693 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.229874 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.229873 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.230095 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.230221 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.229951 4903 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.230327 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.230343 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.230393 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:56.230379083 +0000 UTC m=+28.506350594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.230615 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:45:56.230601939 +0000 UTC m=+28.506573450 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.267317 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.298708 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.334566 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.376324 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.378861 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:29:25.022378214 +0000 UTC Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.415699 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.415699 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.415818 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:45:52 crc kubenswrapper[4903]: E0128 15:45:52.415874 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.425152 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.461912 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.496284 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.538959 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.556771 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xzz6z" event={"ID":"6e8165e7-4fdc-495d-9408-87fca9df790e","Type":"ContainerStarted","Data":"0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979"} Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.556832 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xzz6z" event={"ID":"6e8165e7-4fdc-495d-9408-87fca9df790e","Type":"ContainerStarted","Data":"de8aa2386065c69ad8c6cb3b9d6172cc99737a23440a54c1754072995438607f"} Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.559231 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerStarted","Data":"3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356"} Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.562103 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.562144 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.562156 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.562165 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.577433 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.626344 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.656246 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.697372 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.737428 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.777206 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.815284 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.858578 4903 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.905171 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.937150 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:52 crc kubenswrapper[4903]: I0128 15:45:52.993133 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:52Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.014679 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.055227 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.096041 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.142518 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.177731 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.219277 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.258459 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.295737 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.338436 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.374837 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.378963 4903 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:55:01.533288059 +0000 UTC Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.412417 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:53 crc kubenswrapper[4903]: E0128 15:45:53.412570 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.418377 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.566296 4903 generic.go:334] "Generic (PLEG): container finished" podID="0566b7c5-190a-4000-9e3c-ff9d91235ccd" containerID="3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356" exitCode=0 Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.566374 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerDied","Data":"3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356"} Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.570778 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.570854 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.583843 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.601698 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.626709 4903 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.642220 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.654753 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.665824 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.702702 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.733441 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.780828 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.819998 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.827236 4903 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.879820 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.922237 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.957829 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:53 crc kubenswrapper[4903]: I0128 15:45:53.998867 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.038571 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.155651 4903 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.157706 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.157742 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.157751 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.157856 4903 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.164115 4903 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.164508 4903 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.165903 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.165958 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.165976 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.166001 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.166019 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.191173 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.195126 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.195228 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.195249 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.195273 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.195294 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.208555 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.211734 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.211778 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.211790 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.211807 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.211818 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.222673 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.226055 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.226093 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.226105 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.226122 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.226133 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.237390 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.241106 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.241146 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.241158 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.241176 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.241189 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.253176 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.253288 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.254934 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.254978 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.254990 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.255009 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.255021 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.357957 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.358003 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.358013 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.358031 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.358046 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.379482 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:17:18.078692215 +0000 UTC Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.412963 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.413015 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.413132 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:45:54 crc kubenswrapper[4903]: E0128 15:45:54.413224 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.464572 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.464606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.464617 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.464632 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.464645 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.567062 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.567100 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.567117 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.567151 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.567168 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.575951 4903 generic.go:334] "Generic (PLEG): container finished" podID="0566b7c5-190a-4000-9e3c-ff9d91235ccd" containerID="1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8" exitCode=0 Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.575997 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerDied","Data":"1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.590895 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.607935 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.630659 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.647955 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.665454 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.669396 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.669433 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.669446 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.669463 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.669474 4903 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.680017 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.695313 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.707759 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.721657 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.730703 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.747884 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.761752 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.770835 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.775741 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.775781 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.775794 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc 
kubenswrapper[4903]: I0128 15:45:54.775811 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.775823 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.781876 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.798305 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.878425 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.878464 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.878475 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.878493 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.878504 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.980298 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.980334 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.980344 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.980358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:54 crc kubenswrapper[4903]: I0128 15:45:54.980369 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:54Z","lastTransitionTime":"2026-01-28T15:45:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.082885 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.082935 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.082949 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.082970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.082987 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.185633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.185670 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.185680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.185695 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.185704 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.266197 4903 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.287918 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.287964 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.287977 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.287999 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.288009 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.380310 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:45:33.186656292 +0000 UTC Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.389913 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.389943 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.389952 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.389965 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.389974 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.412510 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:55 crc kubenswrapper[4903]: E0128 15:45:55.412696 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.493038 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.493071 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.493080 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.493096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.493106 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.586894 4903 generic.go:334] "Generic (PLEG): container finished" podID="0566b7c5-190a-4000-9e3c-ff9d91235ccd" containerID="88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0" exitCode=0 Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.587001 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerDied","Data":"88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.598197 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.599160 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.602489 4903 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.602525 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.602562 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.602575 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.602586 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.603569 4903 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.623146 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.660816 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.692585 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.705301 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.705344 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.705356 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.705371 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.705403 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.706551 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.723223 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.736671 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.749275 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.760721 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.772894 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.806631 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.808173 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.808201 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.808210 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.808226 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.808236 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.817556 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.832149 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.853262 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.868714 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.910944 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.910995 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.911007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.911026 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:55 crc kubenswrapper[4903]: I0128 15:45:55.911040 4903 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:55Z","lastTransitionTime":"2026-01-28T15:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.014263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.014622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.014664 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.014685 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.014698 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.117246 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.117290 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.117308 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.117331 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.117358 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.176347 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.176605 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.176687 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.176826 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.176912 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:04.176885463 +0000 UTC m=+36.452857014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.177412 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:46:04.177392636 +0000 UTC m=+36.453364187 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.177509 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.177600 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:04.177584191 +0000 UTC m=+36.453555732 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.220432 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.220503 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.220558 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.220595 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.220621 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.277563 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.277615 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277742 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277761 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277773 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277776 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277829 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277855 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277866 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:04.277851238 +0000 UTC m=+36.553822759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.277940 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:04.277913979 +0000 UTC m=+36.553885540 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.323181 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.323244 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.323262 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.323294 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.323311 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.381407 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:08:34.809450482 +0000 UTC Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.412834 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.412981 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.413040 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:56 crc kubenswrapper[4903]: E0128 15:45:56.413221 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.425876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.425943 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.425962 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.425984 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.425998 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.529029 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.529080 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.529092 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.529114 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.529127 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.612180 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerStarted","Data":"1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.629951 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.631770 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.631799 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.631807 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.631820 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.631830 4903 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.651114 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.662649 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.677676 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.689643 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.702203 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.714945 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.727502 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.734378 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.734460 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.734478 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.734501 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.734517 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.739723 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.752903 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.770358 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.783219 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.803882 4903 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.818650 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.829792 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.837933 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.837994 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.838007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc 
kubenswrapper[4903]: I0128 15:45:56.838025 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.838038 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.941916 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.942055 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.942107 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.942474 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:56 crc kubenswrapper[4903]: I0128 15:45:56.942523 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:56Z","lastTransitionTime":"2026-01-28T15:45:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.045337 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.045375 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.045383 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.045398 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.045409 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.148002 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.148045 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.148059 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.148083 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.148134 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.250970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.251307 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.251320 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.251338 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.251349 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.353843 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.353882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.353893 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.353910 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.353923 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.382131 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:59:32.743096462 +0000 UTC Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.412679 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:57 crc kubenswrapper[4903]: E0128 15:45:57.412805 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.456219 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.456271 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.456283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.456300 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.456725 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.558740 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.558777 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.558787 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.558803 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.558813 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.621030 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.621468 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.621517 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.621700 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.625006 4903 generic.go:334] "Generic (PLEG): container finished" podID="0566b7c5-190a-4000-9e3c-ff9d91235ccd" containerID="1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66" exitCode=0 Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.625046 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerDied","Data":"1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.636032 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.652477 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba9
6e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.655590 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.656825 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.661404 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.661453 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.661470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.661492 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.661508 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.674680 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.688907 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.701603 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.719818 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.751303 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf91
48ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.763957 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.763993 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.764001 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.764018 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.764028 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.765770 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.777633 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.792763 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.804727 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.818350 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.830345 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.842909 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.856275 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.866658 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.866714 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.866731 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.866752 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.866769 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.868033 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.877664 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.887386 
4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.904395 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.915673 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.934858 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf91
48ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.946216 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.958142 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.969999 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.970038 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.970049 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.970066 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.970078 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:57Z","lastTransitionTime":"2026-01-28T15:45:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.971878 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.984944 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:57 crc kubenswrapper[4903]: I0128 15:45:57.997594 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.007486 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.023836 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.037306 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.052184 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.074605 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.074652 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.074662 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.074678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.074688 4903 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.177155 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.177197 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.177211 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.177272 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.177288 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.279424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.279471 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.279481 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.279498 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.279509 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.381922 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.381973 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.381983 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.381998 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.382008 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.382442 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:33:12.754782949 +0000 UTC Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.412731 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:45:58 crc kubenswrapper[4903]: E0128 15:45:58.412857 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.412931 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:45:58 crc kubenswrapper[4903]: E0128 15:45:58.413072 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.425522 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.446895 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.461500 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.475870 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.483727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.483751 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.483759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.483772 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.483781 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.494216 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.504987 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.516590 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.527554 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.540362 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.550465 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.561975 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.571623 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.585758 4903 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.585799 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.585810 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.585827 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.585839 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.591984 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.604785 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.614749 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.633694 4903 generic.go:334] "Generic (PLEG): container finished" podID="0566b7c5-190a-4000-9e3c-ff9d91235ccd" containerID="d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026" exitCode=0 Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.633744 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" 
event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerDied","Data":"d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.657145 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.677359 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.689039 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.689091 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.689103 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.689120 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.689131 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.689584 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.704153 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.724055 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.738970 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.751235 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.766013 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.780150 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.792489 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.792544 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.792557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.792617 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.792629 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.794117 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.808372 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.822339 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.833928 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.842985 4903 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.845869 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.856461 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.895550 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.895634 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.895653 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.895670 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.895682 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.998357 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.998392 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.998400 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.998414 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:58 crc kubenswrapper[4903]: I0128 15:45:58.998424 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:58Z","lastTransitionTime":"2026-01-28T15:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.100753 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.100821 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.100843 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.100873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.100894 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.203072 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.203129 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.203148 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.203172 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.203186 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.306422 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.306503 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.306578 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.306629 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.306654 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.382832 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:12:36.83363517 +0000 UTC Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.408989 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.409025 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.409035 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.409055 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.409066 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.412498 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:45:59 crc kubenswrapper[4903]: E0128 15:45:59.412662 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.511728 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.511763 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.511771 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.511791 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.511800 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.615048 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.615115 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.615133 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.615160 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.615177 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.644149 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" event={"ID":"0566b7c5-190a-4000-9e3c-ff9d91235ccd","Type":"ContainerStarted","Data":"5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.664203 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.681919 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.694593 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.708426 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 
2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.717799 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.717843 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.717854 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.717877 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.717892 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.729190 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-c
ontroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.752490 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.766107 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.788185 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.803461 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.818087 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.825301 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.825343 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.825355 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.825375 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.825389 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.843843 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.873225 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.891857 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.911041 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.928112 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.928152 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.928165 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.928184 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.928196 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:45:59Z","lastTransitionTime":"2026-01-28T15:45:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:45:59 crc kubenswrapper[4903]: I0128 15:45:59.938335 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:45:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.030648 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.030691 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.030703 
4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.030720 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.030732 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.132915 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.132956 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.132969 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.132985 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.132997 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.235445 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.235474 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.235482 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.235496 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.235504 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.338060 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.338099 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.338109 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.338123 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.338132 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.383553 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:36:36.243113151 +0000 UTC Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.415580 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:00 crc kubenswrapper[4903]: E0128 15:46:00.416194 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.416675 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:00 crc kubenswrapper[4903]: E0128 15:46:00.416756 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.441020 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.441052 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.441065 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.441084 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.441095 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.543907 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.543966 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.543980 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.544001 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.544015 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.646173 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.646210 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.646219 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.646234 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.646243 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.748816 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.748859 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.748869 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.748886 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.748896 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.851436 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.851482 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.851493 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.851511 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.851524 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.954857 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.954913 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.954927 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.954944 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:00 crc kubenswrapper[4903]: I0128 15:46:00.954954 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:00Z","lastTransitionTime":"2026-01-28T15:46:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.057588 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.057627 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.057635 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.057649 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.057657 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.160509 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.160557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.160566 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.160580 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.160589 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.262298 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.262341 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.262357 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.262372 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.262383 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.365113 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.365147 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.365164 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.365179 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.365189 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.384071 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:03:07.704770478 +0000 UTC Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.412613 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:01 crc kubenswrapper[4903]: E0128 15:46:01.412734 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.468185 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.468243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.468267 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.468296 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.468321 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.572121 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.572194 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.572218 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.572250 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.572267 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.652033 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/0.log" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.654402 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7" exitCode=1 Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.654446 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.655107 4903 scope.go:117] "RemoveContainer" containerID="5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.661523 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.669952 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.674587 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.674619 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.674628 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.674642 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.674652 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.685076 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.696783 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.705370 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.733211 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.746785 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.770190 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f3
8cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.777105 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.777162 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.777177 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.777199 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.777213 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.784485 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.797018 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.807356 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.817194 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.828431 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.838327 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.851223 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.864031 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.878705 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.880083 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.880137 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc 
kubenswrapper[4903]: I0128 15:46:01.880153 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.880176 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.880192 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.895252 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.909013 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.929763 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.948244 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.968863 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.982933 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.982984 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.982996 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.983019 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.983032 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:01Z","lastTransitionTime":"2026-01-28T15:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.985442 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:01 crc kubenswrapper[4903]: I0128 15:46:01.994945 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.007988 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.017676 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.036699 4903 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.049808 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.058636 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.069799 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.085220 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.085263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.085272 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.085287 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.085300 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.089830 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.188576 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.188643 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.188662 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.188689 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.188712 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.292179 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.292243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.292262 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.292287 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.292303 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.331867 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz"] Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.332524 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.335233 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.335387 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.360428 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.374432 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.385061 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:59:44.663375375 +0000 UTC Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.386959 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.394677 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.394734 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.394761 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.394788 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.394806 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.403022 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.413281 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.413319 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:02 crc kubenswrapper[4903]: E0128 15:46:02.413509 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:02 crc kubenswrapper[4903]: E0128 15:46:02.413626 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.438978 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf91
48ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.441393 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd494893-bf26-4c20-a223-cea43bdcb107-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.441443 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd494893-bf26-4c20-a223-cea43bdcb107-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.441492 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd494893-bf26-4c20-a223-cea43bdcb107-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.441521 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br8b\" (UniqueName: \"kubernetes.io/projected/bd494893-bf26-4c20-a223-cea43bdcb107-kube-api-access-4br8b\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.455213 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.471785 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.484037 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.498483 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.498557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.498575 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.498600 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.498619 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.507418 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f
3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.524308 4903 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.542258 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e
6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.542955 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd494893-bf26-4c20-a223-cea43bdcb107-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.542998 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd494893-bf26-4c20-a223-cea43bdcb107-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.543048 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd494893-bf26-4c20-a223-cea43bdcb107-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.543073 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4br8b\" (UniqueName: \"kubernetes.io/projected/bd494893-bf26-4c20-a223-cea43bdcb107-kube-api-access-4br8b\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.544189 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd494893-bf26-4c20-a223-cea43bdcb107-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.544303 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd494893-bf26-4c20-a223-cea43bdcb107-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.549501 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd494893-bf26-4c20-a223-cea43bdcb107-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.561222 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.564387 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4br8b\" (UniqueName: \"kubernetes.io/projected/bd494893-bf26-4c20-a223-cea43bdcb107-kube-api-access-4br8b\") pod \"ovnkube-control-plane-749d76644c-4w7fz\" (UID: \"bd494893-bf26-4c20-a223-cea43bdcb107\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.577844 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.589472 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.601333 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.601347 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.601383 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.601498 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.601515 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.601547 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.613252 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.653720 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" Jan 28 15:46:02 crc kubenswrapper[4903]: W0128 15:46:02.668593 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd494893_bf26_4c20_a223_cea43bdcb107.slice/crio-7650202b4f2097474337b11284807a07e0c0d323160ba149316775c5b43ec7ce WatchSource:0}: Error finding container 7650202b4f2097474337b11284807a07e0c0d323160ba149316775c5b43ec7ce: Status 404 returned error can't find the container with id 7650202b4f2097474337b11284807a07e0c0d323160ba149316775c5b43ec7ce Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.706409 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.706470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.706489 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.706515 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.706572 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.809286 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.809325 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.809336 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.809352 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.809363 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.912973 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.913212 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.913220 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.913235 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:02 crc kubenswrapper[4903]: I0128 15:46:02.913244 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:02Z","lastTransitionTime":"2026-01-28T15:46:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.016411 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.016457 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.016470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.016550 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.016564 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.119716 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.119765 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.119779 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.119797 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.119811 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.221630 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.221658 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.221666 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.221680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.221688 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.324552 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.324593 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.324604 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.324622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.324637 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.385856 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:55:17.543397335 +0000 UTC Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.412715 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:03 crc kubenswrapper[4903]: E0128 15:46:03.412930 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.427049 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.427089 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.427102 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.427119 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.427130 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.440590 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kq2bn"] Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.441421 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:03 crc kubenswrapper[4903]: E0128 15:46:03.441560 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.454696 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.467255 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.487270 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.502861 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.519942 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.529556 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.529591 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.529602 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.529622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.529634 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.535463 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.546637 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.552273 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.552487 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqkt\" (UniqueName: \"kubernetes.io/projected/90b23d2e-fec0-494c-9a60-461cc16fe0ae-kube-api-access-4cqkt\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.563295 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.581169 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.602212 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.622889 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.632732 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.632765 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.632775 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.632789 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.632799 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.642408 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.654024 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:03 crc kubenswrapper[4903]: E0128 15:46:03.654194 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.654382 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: E0128 15:46:03.654427 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:46:04.154402506 +0000 UTC m=+36.430374047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.654314 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqkt\" (UniqueName: \"kubernetes.io/projected/90b23d2e-fec0-494c-9a60-461cc16fe0ae-kube-api-access-4cqkt\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.666505 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" event={"ID":"bd494893-bf26-4c20-a223-cea43bdcb107","Type":"ContainerStarted","Data":"7650202b4f2097474337b11284807a07e0c0d323160ba149316775c5b43ec7ce"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.669058 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/0.log" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.672077 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.672438 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.672927 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.675737 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqkt\" (UniqueName: \"kubernetes.io/projected/90b23d2e-fec0-494c-9a60-461cc16fe0ae-kube-api-access-4cqkt\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.699789 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.713110 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.723031 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.734873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.734915 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.734927 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.734945 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.734959 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.738861 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.751664 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.764323 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.787619 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.803978 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.825516 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f3
8cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.836915 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.836967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.836984 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.837002 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.837013 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.842148 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.859779 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.873134 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.886899 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.899544 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.913625 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.928155 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.939349 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.939401 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.939413 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.939435 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.939450 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:03Z","lastTransitionTime":"2026-01-28T15:46:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.948862 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a8
1a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.963681 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-co
ntroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.975482 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:03 crc kubenswrapper[4903]: I0128 15:46:03.988451 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.042042 4903 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.042134 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.042158 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.042189 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.042215 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.144769 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.144829 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.144842 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.144865 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.144884 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.159491 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.159751 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.159859 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:46:05.159833644 +0000 UTC m=+37.435805165 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.246447 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.246486 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.246499 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.246517 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.246552 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.260201 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.260344 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:46:20.260325676 +0000 UTC m=+52.536297187 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.260378 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.260405 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.260522 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.260579 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:20.260571452 +0000 UTC m=+52.536542963 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.260634 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.260782 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:20.260755457 +0000 UTC m=+52.536727018 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.334706 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.334758 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.334772 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.334791 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.334803 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.352050 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.355505 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.355587 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.355604 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.355624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.355637 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.361308 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.361361 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361506 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361555 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361569 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361611 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:20.361596418 +0000 UTC m=+52.637567929 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361896 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361925 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361936 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.361968 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:20.361958168 +0000 UTC m=+52.637929679 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.369208 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.372233 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.372260 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.372268 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.372283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.372292 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.385396 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.386298 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:48:33.54637998 +0000 UTC Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.388673 4903 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.388718 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.388735 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.388761 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.388778 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.403286 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.407241 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.407280 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.407289 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.407303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.407312 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.413213 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.413678 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.413688 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.415680 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.420329 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: E0128 15:46:04.420599 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.422190 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.422224 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.422234 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.422255 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.422265 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.525571 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.525618 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.525632 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.525652 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.525665 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.629793 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.629839 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.629854 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.629876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.629891 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.678679 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" event={"ID":"bd494893-bf26-4c20-a223-cea43bdcb107","Type":"ContainerStarted","Data":"b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.678791 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" event={"ID":"bd494893-bf26-4c20-a223-cea43bdcb107","Type":"ContainerStarted","Data":"dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.705887 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.732580 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.732924 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.733079 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc 
kubenswrapper[4903]: I0128 15:46:04.733202 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.733392 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.736497 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dd
e1f448fcf7f86d36034f2ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.753227 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.768136 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.781495 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.792253 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.803679 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.813893 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.823205 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.835574 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.838162 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.838219 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.838235 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.838253 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.838271 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.853717 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-c
ontroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.868306 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.880222 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.895355 4903 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.905740 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.918036 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.938389 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.941779 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.941823 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.941835 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.941860 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:04 crc kubenswrapper[4903]: I0128 15:46:04.941871 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:04Z","lastTransitionTime":"2026-01-28T15:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.044516 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.044600 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.044616 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.044647 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.044664 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.147794 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.147845 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.147860 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.147876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.147887 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.169982 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:05 crc kubenswrapper[4903]: E0128 15:46:05.170165 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:05 crc kubenswrapper[4903]: E0128 15:46:05.170259 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:46:07.170239517 +0000 UTC m=+39.446211028 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.251093 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.251127 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.251137 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.251155 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.251166 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.353770 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.353909 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.353981 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.354056 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.354123 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.387416 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:38:52.6651546 +0000 UTC Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.412746 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.412767 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:05 crc kubenswrapper[4903]: E0128 15:46:05.412880 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:05 crc kubenswrapper[4903]: E0128 15:46:05.412990 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.457542 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.457594 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.457606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.457644 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.457657 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.560106 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.560145 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.560155 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.560171 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.560183 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.662715 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.662751 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.662760 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.662777 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.662793 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.683011 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/1.log" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.683822 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/0.log" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.686274 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9" exitCode=1 Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.686345 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.686407 4903 scope.go:117] "RemoveContainer" containerID="5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.687018 4903 scope.go:117] "RemoveContainer" containerID="84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9" Jan 28 15:46:05 crc kubenswrapper[4903]: E0128 15:46:05.687157 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.702652 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.714073 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.736625 4903 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.751005 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.762068 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.764816 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.764855 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.764870 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc 
kubenswrapper[4903]: I0128 15:46:05.764886 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.764895 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.777313 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.794950 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.812849 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:05Z\\\",\\\"message\\\":\\\"ailed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event 
Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:46:05.418768 6341 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32f
a41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.825646 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.842629 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.856609 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.867849 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.867925 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.867938 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.867966 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.867981 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.872710 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.892998 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.914099 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.939836 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.956956 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.970105 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.970158 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.970170 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.970186 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.970195 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:05Z","lastTransitionTime":"2026-01-28T15:46:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:05 crc kubenswrapper[4903]: I0128 15:46:05.972913 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.072952 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.073008 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.073031 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.073056 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.073074 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.175942 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.176427 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.176511 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.176614 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.176698 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.280584 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.280650 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.280662 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.280681 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.280692 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.383703 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.383747 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.383759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.383775 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.383787 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.388081 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:20:01.27597249 +0000 UTC Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.412377 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.412459 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:06 crc kubenswrapper[4903]: E0128 15:46:06.412572 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:06 crc kubenswrapper[4903]: E0128 15:46:06.412708 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.485835 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.485885 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.485898 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.485913 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.485924 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.588960 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.589225 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.589328 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.589424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.589509 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.691553 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/1.log" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.691903 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.691967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.691992 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.692024 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.692049 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.795938 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.795970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.795979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.795993 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.796005 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.899126 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.899178 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.899194 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.899218 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:06 crc kubenswrapper[4903]: I0128 15:46:06.899235 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:06Z","lastTransitionTime":"2026-01-28T15:46:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.001736 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.001800 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.001814 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.001830 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.001842 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.104554 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.104597 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.104627 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.104644 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.104654 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.191571 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:07 crc kubenswrapper[4903]: E0128 15:46:07.191777 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:07 crc kubenswrapper[4903]: E0128 15:46:07.191880 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:46:11.191860292 +0000 UTC m=+43.467831803 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.207617 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.207658 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.207667 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.207681 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.207691 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.310660 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.310707 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.310717 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.310745 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.310762 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.389102 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:18:45.178329914 +0000 UTC Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.412453 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.412521 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:07 crc kubenswrapper[4903]: E0128 15:46:07.412619 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:07 crc kubenswrapper[4903]: E0128 15:46:07.412765 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.414687 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.414721 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.414730 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.414746 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.414758 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.517449 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.517749 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.517830 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.517914 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.517988 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.620463 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.620506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.620515 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.620557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.620568 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.723100 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.723150 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.723162 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.723181 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.723196 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.825822 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.825879 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.825891 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.825911 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.825924 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.933475 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.933517 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.933560 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.933588 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:07 crc kubenswrapper[4903]: I0128 15:46:07.933606 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:07Z","lastTransitionTime":"2026-01-28T15:46:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.036695 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.037275 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.037289 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.037319 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.037338 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.139935 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.139990 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.140000 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.140019 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.140030 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.242234 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.242331 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.242341 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.242358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.242368 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.345931 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.345989 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.346003 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.346028 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.346046 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.389566 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:10:16.606920463 +0000 UTC Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.413342 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.413478 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:08 crc kubenswrapper[4903]: E0128 15:46:08.413651 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:08 crc kubenswrapper[4903]: E0128 15:46:08.413711 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.427288 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.446323 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7768c81424036c4c754888019090372deaaf9148ff98edb840a05ee179fbd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:00Z\\\",\\\"message\\\":\\\" 15:46:00.542152 6174 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:46:00.542177 6174 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 15:46:00.542182 6174 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:46:00.542214 6174 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:46:00.545995 6174 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:00.546015 6174 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:00.546030 6174 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:46:00.546068 6174 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:00.546054 6174 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:00.546145 6174 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:46:00.546169 6174 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:46:00.546176 6174 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:46:00.546194 6174 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:46:00.546239 6174 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:00.546246 6174 factory.go:656] Stopping watch factory\\\\nI0128 15:46:00.546264 6174 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:05Z\\\",\\\"message\\\":\\\"ailed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event 
Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:46:05.418768 6341 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32f
a41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.448543 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.448573 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.448584 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.448601 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.448613 4903 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.460842 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.473200 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.488367 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.503310 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.514929 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.526239 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.538581 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.548955 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.551155 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.551224 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.551245 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.551275 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.551294 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.566830 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a8
1a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.585021 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.601870 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.623610 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.637136 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.647728 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.654633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.654714 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.654762 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc 
kubenswrapper[4903]: I0128 15:46:08.654785 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.654797 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.657279 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.758027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.758086 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.758096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.758114 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.758124 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.860666 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.860700 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.860708 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.860723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.860732 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.963607 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.963661 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.963673 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.963694 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:08 crc kubenswrapper[4903]: I0128 15:46:08.963711 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:08Z","lastTransitionTime":"2026-01-28T15:46:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.067186 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.067239 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.067248 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.067270 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.067281 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.170201 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.170243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.170258 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.170278 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.170292 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.273296 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.273707 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.273901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.274121 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.274260 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.377318 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.377690 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.377815 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.377918 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.378015 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.390805 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:59:17.791158726 +0000 UTC Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.412619 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:09 crc kubenswrapper[4903]: E0128 15:46:09.412979 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.412687 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:09 crc kubenswrapper[4903]: E0128 15:46:09.413831 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.480958 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.480989 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.480997 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.481011 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.481021 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.583773 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.583824 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.583840 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.583856 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.583867 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.686018 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.686064 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.686076 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.686094 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.686104 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.789817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.789865 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.789880 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.789901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.789912 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.893226 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.893308 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.893333 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.893362 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.893380 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.996473 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.996548 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.996562 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.996588 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:09 crc kubenswrapper[4903]: I0128 15:46:09.996603 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:09Z","lastTransitionTime":"2026-01-28T15:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.099955 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.100004 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.100018 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.100057 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.100072 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.203456 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.203521 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.203621 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.203655 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.203672 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.306657 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.306720 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.306743 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.306772 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.306798 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.391485 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:54:12.890481818 +0000 UTC Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.409656 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.409715 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.409733 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.409756 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.409773 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.413064 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:10 crc kubenswrapper[4903]: E0128 15:46:10.413222 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.413350 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:10 crc kubenswrapper[4903]: E0128 15:46:10.413604 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.513211 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.513275 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.513295 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.513323 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.513347 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.616783 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.616882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.616906 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.616939 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.616965 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.720123 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.720214 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.720233 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.720265 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.720287 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.823281 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.823337 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.823355 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.823375 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.823387 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.925774 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.925820 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.925831 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.925848 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:10 crc kubenswrapper[4903]: I0128 15:46:10.925862 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:10Z","lastTransitionTime":"2026-01-28T15:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.028346 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.028411 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.028427 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.028447 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.028460 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.131464 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.131513 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.131546 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.131572 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.131592 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.235305 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.235367 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.235386 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.235664 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.236049 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.236629 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:11 crc kubenswrapper[4903]: E0128 15:46:11.236967 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:11 crc kubenswrapper[4903]: E0128 15:46:11.237251 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:46:19.237089372 +0000 UTC m=+51.513060903 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.338468 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.338513 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.338556 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.338576 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.338588 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.392244 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:56:50.656765367 +0000 UTC Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.412603 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.412626 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:11 crc kubenswrapper[4903]: E0128 15:46:11.413167 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:11 crc kubenswrapper[4903]: E0128 15:46:11.413265 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.440792 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.440835 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.440846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.440862 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.440872 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.543266 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.543305 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.543317 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.543334 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.543346 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.647860 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.647916 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.647930 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.647948 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.647959 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.750631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.750680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.750693 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.750711 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.750721 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.853924 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.853967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.853979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.853995 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.854007 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.957370 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.957448 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.957463 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.957479 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:11 crc kubenswrapper[4903]: I0128 15:46:11.957489 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:11Z","lastTransitionTime":"2026-01-28T15:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.060387 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.060432 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.060443 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.060461 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.060475 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.164013 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.164438 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.164682 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.164845 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.164980 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.267929 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.267972 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.267980 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.267997 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.268007 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.371147 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.371188 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.371196 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.371212 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.371223 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.393233 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:22:05.146553424 +0000 UTC Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.412932 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.413023 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:12 crc kubenswrapper[4903]: E0128 15:46:12.413205 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:12 crc kubenswrapper[4903]: E0128 15:46:12.413374 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.474558 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.474599 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.474611 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.474650 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.474664 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.577728 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.577779 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.577795 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.577817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.577835 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.679886 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.680180 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.680394 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.680436 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.680452 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.783414 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.783697 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.783776 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.783846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.783903 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.886789 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.886839 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.886853 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.886872 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.886884 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.989417 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.989459 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.989471 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.989488 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:12 crc kubenswrapper[4903]: I0128 15:46:12.989500 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:12Z","lastTransitionTime":"2026-01-28T15:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.092950 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.093051 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.093101 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.093128 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.093146 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.196446 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.196478 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.196519 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.196544 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.196553 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.299193 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.299705 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.299817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.299851 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.299868 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.394162 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:20:30.138122692 +0000 UTC Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.402650 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.402702 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.402715 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.402738 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.402753 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.413454 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.413506 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:13 crc kubenswrapper[4903]: E0128 15:46:13.413673 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:13 crc kubenswrapper[4903]: E0128 15:46:13.413796 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.505141 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.505198 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.505210 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.505231 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.505244 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.607655 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.607700 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.607711 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.607729 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.607741 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.709660 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.709691 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.709701 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.709738 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.709751 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.813126 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.813175 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.813184 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.813200 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.813210 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.916682 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.916730 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.916742 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.916759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:13 crc kubenswrapper[4903]: I0128 15:46:13.916773 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:13Z","lastTransitionTime":"2026-01-28T15:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.019129 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.019403 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.019467 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.019565 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.019680 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.122177 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.122223 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.122232 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.122246 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.122257 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.225352 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.225401 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.225412 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.225430 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.225448 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.327508 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.327574 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.327585 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.327601 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.327611 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.394582 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:00:20.256299845 +0000 UTC Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.413393 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.413455 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.413663 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.413800 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.430272 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.430318 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.430329 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.430347 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.430358 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.533613 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.533673 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.533695 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.533719 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.533731 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.627410 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.627736 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.627815 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.627916 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.627994 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.643801 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.648495 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.648736 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.648873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.649050 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.649181 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.663258 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.672645 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.672724 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.672748 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.672774 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.672793 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.688610 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.694002 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.694043 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.694061 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.694083 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.694100 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.708716 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.712962 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.712995 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.713007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.713020 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.713031 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.727499 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:14 crc kubenswrapper[4903]: E0128 15:46:14.727645 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.729637 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.729686 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.729702 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.729728 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.729747 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.832495 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.832579 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.832595 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.832618 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.832633 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.936035 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.936094 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.936110 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.936136 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:14 crc kubenswrapper[4903]: I0128 15:46:14.936152 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:14Z","lastTransitionTime":"2026-01-28T15:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.039331 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.039522 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.039622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.039723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.039825 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.143455 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.143506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.143560 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.143589 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.143673 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.265671 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.265719 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.265730 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.265753 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.265763 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.369685 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.369750 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.369773 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.369803 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.369827 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.395148 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:47:47.589952086 +0000 UTC Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.412906 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.412906 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:15 crc kubenswrapper[4903]: E0128 15:46:15.413135 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:15 crc kubenswrapper[4903]: E0128 15:46:15.413285 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.473042 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.473108 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.473127 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.473154 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.473173 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.576581 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.576661 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.576686 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.576718 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.576745 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.680167 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.680238 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.680256 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.680281 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.680298 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.783563 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.783815 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.783933 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.784001 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.784064 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.886399 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.886435 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.886447 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.886463 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.886478 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.989478 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.989518 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.989560 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.989578 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:15 crc kubenswrapper[4903]: I0128 15:46:15.989590 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:15Z","lastTransitionTime":"2026-01-28T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.092743 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.092786 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.092799 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.092825 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.092837 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.195813 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.195857 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.195867 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.195887 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.195898 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.297872 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.297909 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.297918 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.297931 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.297950 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.396052 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:26:19.291363684 +0000 UTC Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.400155 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.400183 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.400190 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.400203 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.400211 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.413730 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:16 crc kubenswrapper[4903]: E0128 15:46:16.413890 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.413917 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:16 crc kubenswrapper[4903]: E0128 15:46:16.414061 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.501896 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.501927 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.501936 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.501949 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.501961 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.604553 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.604613 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.604626 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.604646 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.604660 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.707377 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.707422 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.707432 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.707449 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.707459 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.809714 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.809744 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.809752 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.809765 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.809774 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.912217 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.912281 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.912298 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.912323 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:16 crc kubenswrapper[4903]: I0128 15:46:16.912339 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:16Z","lastTransitionTime":"2026-01-28T15:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.017252 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.017304 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.017315 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.017332 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.017344 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.121007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.121058 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.121070 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.121090 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.121102 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.223894 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.223936 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.223947 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.223967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.223979 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.327378 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.327443 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.327455 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.327492 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.327505 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.397103 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:27:15.611476212 +0000 UTC Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.412875 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.412875 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:17 crc kubenswrapper[4903]: E0128 15:46:17.413289 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.413438 4903 scope.go:117] "RemoveContainer" containerID="84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9" Jan 28 15:46:17 crc kubenswrapper[4903]: E0128 15:46:17.413451 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.429474 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.429509 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.429519 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.429551 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.429562 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.429665 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is 
after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.459409 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:05Z\\\",\\\"message\\\":\\\"ailed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:46:05.418768 6341 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.474915 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.488377 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.506779 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.521568 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.531963 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.532007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.532020 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.532037 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.532050 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.534396 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.548313 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.560954 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.574841 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.589175 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.603819 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.614357 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.631509 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.634318 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.634364 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.634379 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.634399 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.634411 4903 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.648683 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.661467 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.673582 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.777476 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.777518 4903 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.777549 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.777567 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.777577 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.780733 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/1.log" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.783475 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.783951 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.800994 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.812083 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.820241 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.830413 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.840298 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.857805 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd542
2e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:05Z\\\",\\\"message\\\":\\\"ailed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:46:05.418768 6341 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.872828 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.879917 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.879962 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.879973 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.879990 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.880002 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.882451 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.896173 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.914644 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.927929 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.942755 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.959306 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.973643 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.986764 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:17Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.988388 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.988807 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.988817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.988834 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:17 crc kubenswrapper[4903]: I0128 15:46:17.988843 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:17Z","lastTransitionTime":"2026-01-28T15:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.004064 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.015852 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.091186 4903 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.091227 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.091236 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.091251 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.091259 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.194106 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.194150 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.194160 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.194177 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.194186 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.297271 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.297312 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.297322 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.297336 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.297346 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.397604 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:50:10.41075418 +0000 UTC Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.399029 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.399062 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.399076 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.399092 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.399103 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.412665 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:18 crc kubenswrapper[4903]: E0128 15:46:18.412869 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.413199 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:18 crc kubenswrapper[4903]: E0128 15:46:18.413332 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.430422 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.444389 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.455739 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.468831 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":
\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.482867 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-con
troller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.501295 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.501346 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.501362 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.501379 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.501392 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.501913 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.519079 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.533233 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.543826 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.552824 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.568944 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.580476 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.589288 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.598220 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.603394 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.603634 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.603645 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.603659 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.603669 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.607556 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.625495 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:05Z\\\",\\\"message\\\":\\\"ailed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:46:05.418768 6341 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column 
_uuid\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.635576 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.706442 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.706500 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.706509 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.706524 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.706558 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.789032 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/2.log" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.789961 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/1.log" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.795397 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566" exitCode=1 Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.795517 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.795670 4903 scope.go:117] "RemoveContainer" containerID="84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.797006 4903 scope.go:117] "RemoveContainer" containerID="8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566" Jan 28 15:46:18 crc kubenswrapper[4903]: E0128 15:46:18.797363 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.808781 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.808813 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.808823 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 
15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.808840 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.808851 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.813569 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.827349 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.839440 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.853350 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.864313 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.875697 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.897272 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.911444 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.911490 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.911504 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.911524 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.911554 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:18Z","lastTransitionTime":"2026-01-28T15:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.913406 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-c
ontroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.931863 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.945966 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.963479 4903 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.976986 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:18 crc kubenswrapper[4903]: I0128 15:46:18.989961 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.015013 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.015071 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.015082 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.015099 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.015119 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.015438 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.031122 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.052002 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd542
2e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f01a050552dde06b4b36b0350ccb8e5dafe8dde1f448fcf7f86d36034f2ee9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:05Z\\\",\\\"message\\\":\\\"ailed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:05Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:46:05.418768 6341 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.066141 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.118358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.118440 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.118452 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.118477 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.118492 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.221393 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.221473 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.221492 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.221519 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.221567 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.325130 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.325204 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.325227 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.325263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.325288 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.334916 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:19 crc kubenswrapper[4903]: E0128 15:46:19.335154 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:19 crc kubenswrapper[4903]: E0128 15:46:19.335229 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:46:35.335206809 +0000 UTC m=+67.611178330 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.398092 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:48:52.404702815 +0000 UTC Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.412477 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.412669 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:19 crc kubenswrapper[4903]: E0128 15:46:19.412688 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:19 crc kubenswrapper[4903]: E0128 15:46:19.412745 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.428650 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.428707 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.428726 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.428759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.428778 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.532102 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.532160 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.532176 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.532201 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.532219 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.635521 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.635609 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.635624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.635651 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.635667 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.739617 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.739709 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.739727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.739753 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.739771 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.802619 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/2.log" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.809190 4903 scope.go:117] "RemoveContainer" containerID="8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566" Jan 28 15:46:19 crc kubenswrapper[4903]: E0128 15:46:19.809561 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.827611 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.841887 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\
\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.844967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.845007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.845022 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.845041 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.845053 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.855352 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.888000 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.902877 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.931258 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f3
8cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod 
openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.944696 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.948647 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.948893 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.949040 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.949222 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.949337 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:19Z","lastTransitionTime":"2026-01-28T15:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.957871 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.969596 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.981006 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:19 crc kubenswrapper[4903]: I0128 15:46:19.993650 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.006399 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.017472 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.035325 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.049346 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.052573 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.052631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.052644 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.052674 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.052689 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.063190 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.074981 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.155177 4903 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.155208 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.155217 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.155232 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.155241 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.257745 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.257802 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.257819 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.257846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.257863 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.347645 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.347766 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.347794 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:46:52.347767956 +0000 UTC m=+84.623739467 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.347839 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.347886 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.347946 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:52.34792976 +0000 UTC m=+84.623901281 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.347991 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.348023 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:52.348016882 +0000 UTC m=+84.623988393 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.360440 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.360515 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.360564 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.360593 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.360611 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.398925 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:21:31.063961661 +0000 UTC Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.412685 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.412713 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.412834 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.413103 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.448843 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.448949 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449152 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449177 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449192 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449152 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449282 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449300 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449255 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:52.449234702 +0000 UTC m=+84.725206203 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:20 crc kubenswrapper[4903]: E0128 15:46:20.449393 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:46:52.449368436 +0000 UTC m=+84.725340157 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.463165 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.463211 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.463222 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.463241 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.463255 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.566204 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.566268 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.566282 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.566303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.566318 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.670121 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.670164 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.670175 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.670193 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.670207 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.701305 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.714755 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.726757 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.741090 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.752124 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.763459 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.772107 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.772138 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.772147 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.772160 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.772169 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.778117 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.796092 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.806242 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.816563 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.828065 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.838051 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.852350 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.864991 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.874587 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.874631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.874644 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.874663 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.874676 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.879436 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.892650 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.904992 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.918520 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.930945 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:20Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.976982 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.977037 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.977054 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.977076 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:20 crc kubenswrapper[4903]: I0128 15:46:20.977090 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:20Z","lastTransitionTime":"2026-01-28T15:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.080065 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.080104 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.080117 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.080136 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.080148 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.183007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.183085 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.183109 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.183144 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.183167 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.286867 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.286945 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.286970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.287008 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.287029 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.390488 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.390629 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.390647 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.390675 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.390687 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.399169 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:53:11.738573339 +0000 UTC Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.412597 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.412733 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:21 crc kubenswrapper[4903]: E0128 15:46:21.412736 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:21 crc kubenswrapper[4903]: E0128 15:46:21.412911 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.493907 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.493976 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.493998 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.494027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.494048 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.599832 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.599869 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.599882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.599901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.599914 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.704062 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.704108 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.704126 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.704147 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.704163 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.807393 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.807423 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.807431 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.807443 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.807452 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.909209 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.909240 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.909250 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.909265 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:21 crc kubenswrapper[4903]: I0128 15:46:21.909276 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:21Z","lastTransitionTime":"2026-01-28T15:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.011722 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.011771 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.011781 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.011796 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.011806 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.114561 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.114599 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.114608 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.114622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.114632 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.218021 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.218121 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.218144 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.218174 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.218196 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.321825 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.321879 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.321937 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.321956 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.321970 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.399660 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:37:30.713068875 +0000 UTC Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.413374 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.413398 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:22 crc kubenswrapper[4903]: E0128 15:46:22.413496 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:22 crc kubenswrapper[4903]: E0128 15:46:22.413725 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.423699 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.423723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.423732 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.423744 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.423754 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.527525 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.527665 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.527705 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.527739 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.527765 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.631343 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.631479 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.631514 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.631608 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.631650 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.734408 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.734581 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.734599 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.734616 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.734629 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.838326 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.838388 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.838409 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.838435 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.838454 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.942782 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.942854 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.942873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.942898 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:22 crc kubenswrapper[4903]: I0128 15:46:22.942916 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:22Z","lastTransitionTime":"2026-01-28T15:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.046487 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.046592 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.046615 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.046645 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.046674 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.149517 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.149633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.149656 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.149686 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.149708 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.252876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.252951 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.252972 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.253004 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.253022 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.355982 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.356247 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.356339 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.356470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.356600 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.399965 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:13:50.134504049 +0000 UTC Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.413369 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.413372 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:23 crc kubenswrapper[4903]: E0128 15:46:23.413580 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:23 crc kubenswrapper[4903]: E0128 15:46:23.413703 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.460320 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.460684 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.460852 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.460972 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.461090 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.563216 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.563279 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.563294 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.563318 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.563333 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.666008 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.666064 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.666079 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.666096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.666107 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.769081 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.769123 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.769132 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.769147 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.769158 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.872000 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.872072 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.872090 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.872115 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.872132 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.974460 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.974500 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.974513 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.974557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:23 crc kubenswrapper[4903]: I0128 15:46:23.974581 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:23Z","lastTransitionTime":"2026-01-28T15:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.077283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.077341 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.077358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.077378 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.077394 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.181415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.181471 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.181487 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.181511 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.181550 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.283831 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.283874 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.283885 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.283902 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.283913 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.387324 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.387385 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.387404 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.387428 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.387442 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.401151 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:02:35.11218157 +0000 UTC Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.412685 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.412763 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:24 crc kubenswrapper[4903]: E0128 15:46:24.412869 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:24 crc kubenswrapper[4903]: E0128 15:46:24.413147 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.491308 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.491384 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.491417 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.491448 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.491469 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.594498 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.594563 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.594576 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.594594 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.594606 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.696652 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.696704 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.696715 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.696731 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.696744 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.798908 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.798976 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.798993 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.799017 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.799036 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.901829 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.901918 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.901958 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.901995 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:24 crc kubenswrapper[4903]: I0128 15:46:24.902016 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:24Z","lastTransitionTime":"2026-01-28T15:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.004617 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.004668 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.004680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.004697 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.004708 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.101282 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.101365 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.101390 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.101423 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.101444 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.124042 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.129051 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.129108 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.129127 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.129153 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.129170 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.151660 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.156602 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.156653 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.156673 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.156698 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.156716 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.172983 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.177292 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.177364 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.177380 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.177399 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.177411 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.197300 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.201928 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.202004 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.202025 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.202053 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.202071 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.217489 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:25Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.217689 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.219404 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.219486 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.219502 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.219545 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.219563 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.322552 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.322622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.322642 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.322674 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.322695 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.401473 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:35:00.43370202 +0000 UTC Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.413230 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.413337 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.413414 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:25 crc kubenswrapper[4903]: E0128 15:46:25.413585 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.426423 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.426485 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.426500 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.426522 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.426593 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.529262 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.529298 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.529309 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.529327 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.529339 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.632522 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.632598 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.632613 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.632638 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.632650 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.736619 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.736660 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.736674 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.736692 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.736705 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.838737 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.838813 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.838827 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.838846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.838890 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.941873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.941968 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.941986 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.942015 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:25 crc kubenswrapper[4903]: I0128 15:46:25.942032 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:25Z","lastTransitionTime":"2026-01-28T15:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.044151 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.044202 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.044220 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.044236 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.044245 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.146705 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.146749 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.146761 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.146788 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.146806 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.249934 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.250017 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.250043 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.250072 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.250091 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.353302 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.353361 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.353379 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.353404 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.353420 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.401943 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:28:44.875753121 +0000 UTC Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.412837 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.412837 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:26 crc kubenswrapper[4903]: E0128 15:46:26.413128 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:26 crc kubenswrapper[4903]: E0128 15:46:26.413000 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.456746 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.456813 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.456836 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.456864 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.456886 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.560277 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.560345 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.560369 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.560400 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.560426 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.663506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.663623 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.663646 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.663673 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.663737 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.766505 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.766590 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.766606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.766628 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.766643 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.869615 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.869665 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.869678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.869698 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.869709 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.972479 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.972611 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.972636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.972666 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:26 crc kubenswrapper[4903]: I0128 15:46:26.972687 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:26Z","lastTransitionTime":"2026-01-28T15:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.075598 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.075636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.075663 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.075679 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.075689 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.178355 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.178412 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.178431 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.178457 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.178474 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.281870 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.281946 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.281970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.282003 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.282025 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.385256 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.385321 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.385342 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.385371 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.385394 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.402956 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:05:37.164802892 +0000 UTC Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.413262 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.413323 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:27 crc kubenswrapper[4903]: E0128 15:46:27.413423 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:27 crc kubenswrapper[4903]: E0128 15:46:27.413636 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.489338 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.489414 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.489429 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.489452 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.489467 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.592808 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.592872 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.592910 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.592945 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.592967 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.695767 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.695826 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.695846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.695870 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.695888 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.798947 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.799012 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.799039 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.799062 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.799078 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.902584 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.902653 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.902674 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.902706 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:27 crc kubenswrapper[4903]: I0128 15:46:27.902730 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:27Z","lastTransitionTime":"2026-01-28T15:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.005666 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.005772 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.005783 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.005799 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.005808 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.108578 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.108638 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.108652 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.108667 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.108678 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.211087 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.211156 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.211167 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.211186 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.211196 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.314247 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.314394 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.314423 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.314452 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.314472 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.403823 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:59:49.598144063 +0000 UTC Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.413291 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.413332 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:28 crc kubenswrapper[4903]: E0128 15:46:28.413523 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:28 crc kubenswrapper[4903]: E0128 15:46:28.413754 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.418887 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.418951 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.418966 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.418988 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.419002 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.458614 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.472906 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.488367 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.506744 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.520645 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.520812 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.520922 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.521031 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.521142 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.528691 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.542912 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.562231 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd542
2e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.575908 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.589859 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.600627 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.612262 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.623125 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.623187 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.623208 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.623233 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.623251 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.626638 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a8
1a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.640764 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-co
ntroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.656893 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.672809 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.688775 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.700124 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.710498 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:28Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.725571 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.725609 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.725619 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.725633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.725642 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.828321 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.828380 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.828402 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.828430 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.828451 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.932446 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.932947 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.932969 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.932997 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:28 crc kubenswrapper[4903]: I0128 15:46:28.933016 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:28Z","lastTransitionTime":"2026-01-28T15:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.036977 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.037010 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.037019 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.037032 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.037041 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.139823 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.139929 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.139949 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.139975 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.139992 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.243108 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.243179 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.243197 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.243223 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.243241 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.346078 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.346141 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.346157 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.346182 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.346200 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.405052 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:22:00.678864821 +0000 UTC Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.412413 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.412435 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:29 crc kubenswrapper[4903]: E0128 15:46:29.412688 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:29 crc kubenswrapper[4903]: E0128 15:46:29.412871 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.449636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.449740 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.449758 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.449782 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.449802 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.552851 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.552922 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.552934 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.552954 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.552967 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.655781 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.655846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.655866 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.655893 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.655918 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.758580 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.758622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.758636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.758652 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.758664 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.860884 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.861001 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.861014 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.861029 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.861037 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.963437 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.963477 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.963487 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.963501 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:29 crc kubenswrapper[4903]: I0128 15:46:29.963513 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:29Z","lastTransitionTime":"2026-01-28T15:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.066782 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.066830 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.066842 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.066863 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.066876 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.170578 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.170660 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.170685 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.170720 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.170744 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.273342 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.273411 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.273424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.273453 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.273467 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.376297 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.376351 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.376361 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.376376 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.376389 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.405684 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:44:30.517252382 +0000 UTC Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.413099 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.413127 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:30 crc kubenswrapper[4903]: E0128 15:46:30.413249 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:30 crc kubenswrapper[4903]: E0128 15:46:30.413377 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.479678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.479764 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.479787 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.479817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.479840 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.582510 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.582652 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.582680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.582717 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.582743 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.685562 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.685605 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.685615 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.685631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.685642 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.787928 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.787994 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.788016 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.788047 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.788070 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.890550 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.890605 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.890631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.890656 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.890669 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.993731 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.993761 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.993769 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.993783 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:30 crc kubenswrapper[4903]: I0128 15:46:30.993792 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:30Z","lastTransitionTime":"2026-01-28T15:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.097207 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.097265 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.097283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.097304 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.097314 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.202113 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.202186 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.202204 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.202232 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.202249 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.304475 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.304581 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.304601 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.304629 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.304651 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.405919 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:08:19.535125344 +0000 UTC Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.408663 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.408739 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.408765 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.408796 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.408824 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.413321 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.413347 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:31 crc kubenswrapper[4903]: E0128 15:46:31.413520 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:31 crc kubenswrapper[4903]: E0128 15:46:31.414000 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.415141 4903 scope.go:117] "RemoveContainer" containerID="8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566" Jan 28 15:46:31 crc kubenswrapper[4903]: E0128 15:46:31.415504 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.512200 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.512243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.512259 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.512278 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.512292 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.615486 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.615615 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.615648 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.615678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.615702 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.719442 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.719518 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.719568 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.719592 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.719606 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.822115 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.822164 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.822180 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.822202 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.822214 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.924062 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.924129 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.924144 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.924165 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:31 crc kubenswrapper[4903]: I0128 15:46:31.924178 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:31Z","lastTransitionTime":"2026-01-28T15:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.026565 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.026607 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.026619 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.026636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.026650 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.129497 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.129597 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.129624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.129653 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.129677 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.231771 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.231830 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.231840 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.231875 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.231885 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.334090 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.334128 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.334137 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.334150 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.334158 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.407201 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:49:59.942927563 +0000 UTC Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.412503 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:32 crc kubenswrapper[4903]: E0128 15:46:32.412693 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.412514 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:32 crc kubenswrapper[4903]: E0128 15:46:32.412823 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.436707 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.436791 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.436805 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.436825 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.436837 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.539577 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.539625 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.539640 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.539682 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.539699 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.643044 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.643081 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.643091 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.643106 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.643115 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.746379 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.746413 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.746424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.746439 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.746450 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.849196 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.849246 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.849263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.849286 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.849302 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.951876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.951957 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.951974 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.952027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:32 crc kubenswrapper[4903]: I0128 15:46:32.952046 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:32Z","lastTransitionTime":"2026-01-28T15:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.054711 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.054749 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.054761 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.054779 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.054790 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.157196 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.157257 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.157279 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.157309 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.157332 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.259507 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.259606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.259633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.259655 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.259671 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.362811 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.362859 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.362876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.362896 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.362912 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.407757 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:48:59.436664218 +0000 UTC Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.413078 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.413135 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:33 crc kubenswrapper[4903]: E0128 15:46:33.413268 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:33 crc kubenswrapper[4903]: E0128 15:46:33.413367 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.466095 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.466136 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.466147 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.466164 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.466177 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.568096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.568135 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.568146 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.568163 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.568175 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.671261 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.671308 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.671317 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.671334 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.671344 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.773823 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.773882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.773909 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.773933 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.773949 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.875929 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.875976 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.875987 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.876003 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.876016 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.978142 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.978195 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.978206 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.978222 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:33 crc kubenswrapper[4903]: I0128 15:46:33.978238 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:33Z","lastTransitionTime":"2026-01-28T15:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.080701 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.080790 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.080806 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.080828 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.080843 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.183804 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.183860 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.183878 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.183902 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.183919 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.286195 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.286236 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.286263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.286279 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.286288 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.388598 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.388638 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.388646 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.388659 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.388668 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.407893 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:47:48.031455882 +0000 UTC Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.413357 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.413404 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:34 crc kubenswrapper[4903]: E0128 15:46:34.413465 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:34 crc kubenswrapper[4903]: E0128 15:46:34.413631 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.490586 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.490631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.490643 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.490657 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.490666 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.592935 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.592967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.592975 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.592988 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.592996 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.696249 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.696292 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.696301 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.696317 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.696329 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.798836 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.798891 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.798902 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.798919 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.798932 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.900916 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.900978 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.900997 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.901021 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:34 crc kubenswrapper[4903]: I0128 15:46:34.901064 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:34Z","lastTransitionTime":"2026-01-28T15:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.003026 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.003070 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.003079 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.003096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.003106 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.105550 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.105595 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.105607 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.105644 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.105657 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.208459 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.208507 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.208562 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.208580 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.208592 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.311271 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.311324 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.311340 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.311364 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.311384 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.408981 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:30:10.766761888 +0000 UTC Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.412408 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.412509 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.412673 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.412821 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.414466 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.414550 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.414561 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.414578 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.414589 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.425301 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.425476 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.425574 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:47:07.425551469 +0000 UTC m=+99.701523080 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.517295 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.517330 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.517340 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.517353 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.517362 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.566708 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.566757 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.566766 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.566783 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.566796 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.579721 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.583863 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.583921 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.583936 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.583961 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.583977 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.600024 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.608196 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.608243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.608258 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.608277 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.608288 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.621848 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.626569 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.626601 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.626609 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.626626 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.626639 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.638977 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.643913 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.643962 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.643974 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.643992 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.644004 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.659392 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:35 crc kubenswrapper[4903]: E0128 15:46:35.659567 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.661491 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.661583 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.661602 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.661633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.661651 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.763616 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.763667 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.763680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.763697 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.763709 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.865808 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.865845 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.865861 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.865883 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.865895 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.968214 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.968266 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.968277 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.968292 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:35 crc kubenswrapper[4903]: I0128 15:46:35.968303 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:35Z","lastTransitionTime":"2026-01-28T15:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.070781 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.070827 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.070836 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.070853 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.070864 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.173838 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.173877 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.173888 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.173906 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.173917 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.276873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.276927 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.276940 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.276954 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.276963 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.379327 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.379378 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.379391 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.379409 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.379421 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.409810 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:07:58.923502824 +0000 UTC Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.413176 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.413206 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:36 crc kubenswrapper[4903]: E0128 15:46:36.413342 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:36 crc kubenswrapper[4903]: E0128 15:46:36.413472 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.481563 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.481609 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.481622 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.481640 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.481652 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.583985 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.584031 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.584047 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.584069 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.584087 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.686896 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.686932 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.686943 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.686959 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.686970 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.789840 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.789883 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.789892 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.789906 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.789918 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.891993 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.892041 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.892053 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.892070 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.892081 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.994479 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.994556 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.994579 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.994603 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:36 crc kubenswrapper[4903]: I0128 15:46:36.994623 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:36Z","lastTransitionTime":"2026-01-28T15:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.097039 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.097074 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.097083 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.097097 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.097108 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.199351 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.199397 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.199408 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.199426 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.199437 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.301424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.301489 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.301506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.301574 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.301600 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.406794 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.406877 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.406894 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.406917 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.406933 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.410984 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:44:33.462761952 +0000 UTC Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.412322 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.412366 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:37 crc kubenswrapper[4903]: E0128 15:46:37.412615 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:37 crc kubenswrapper[4903]: E0128 15:46:37.412750 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.485523 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.509005 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.509032 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.509040 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.509054 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.509063 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.611424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.611450 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.611458 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.611470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.611479 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.713965 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.714014 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.714024 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.714039 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.714048 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.816717 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.816766 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.816797 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.816814 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.816824 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.864915 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/0.log" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.864974 4903 generic.go:334] "Generic (PLEG): container finished" podID="368501de-b207-4b6b-a0fb-eba74fe5ec74" containerID="ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f" exitCode=1 Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.865120 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerDied","Data":"ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.865736 4903 scope.go:117] "RemoveContainer" containerID="ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.879930 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.895223 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.906399 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.918155 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.919347 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.919371 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.919381 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.919394 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.919403 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:37Z","lastTransitionTime":"2026-01-28T15:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.929856 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.943336 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.964680 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.977049 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:37 crc kubenswrapper[4903]: I0128 15:46:37.987458 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.001732 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.013968 4903 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.021723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.021752 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.021762 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.021776 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.021784 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.037051 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.054518 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.068991 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.083035 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.099072 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.116077 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.124979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.125247 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.125335 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.125441 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.125557 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.139459 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd542
2e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.153688 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.228439 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.228760 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.228827 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.228917 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.228996 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.331512 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.331562 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.331579 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.331595 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.331606 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.411272 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:20:52.903887794 +0000 UTC Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.412671 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.412673 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:38 crc kubenswrapper[4903]: E0128 15:46:38.412968 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:38 crc kubenswrapper[4903]: E0128 15:46:38.412983 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.433338 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.433374 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.433384 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.433403 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.433445 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.436005 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.448987 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.459504 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.477623 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.488570 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.498141 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.507327 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.521081 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.531952 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.535443 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.535475 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.535498 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.535514 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.535522 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.548418 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd542
2e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.559465 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.571029 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.580798 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.593672 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.605508 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.618896 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.630808 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.637795 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.637838 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.637850 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.637867 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.637879 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.642338 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.661255 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.739727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.739765 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.739775 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.739790 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.739799 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.842619 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.842658 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.842668 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.842684 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.842693 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.868593 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/0.log" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.868643 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerStarted","Data":"47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.885775 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.897148 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.909820 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.923764 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.937237 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.946370 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.946407 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.946416 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.946432 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.946443 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:38Z","lastTransitionTime":"2026-01-28T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.952471 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.970025 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.986382 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:38 crc kubenswrapper[4903]: I0128 15:46:38.997832 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.008986 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.019690 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.035871 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.047004 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.048322 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.048367 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.048377 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.048392 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.048400 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.058773 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.081862 4903 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cr
i-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.094382 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.112391 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on 
pod openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk5
5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.123068 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.136027 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.150957 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.150991 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.151000 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.151016 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.151025 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.253434 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.253467 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.253477 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.253492 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.253501 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.355706 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.355759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.355770 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.355787 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.356139 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.412317 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:21:01.424933101 +0000 UTC Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.412395 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.412421 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:39 crc kubenswrapper[4903]: E0128 15:46:39.412598 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:39 crc kubenswrapper[4903]: E0128 15:46:39.412700 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.458419 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.458457 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.458469 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.458485 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.458497 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.560805 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.560847 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.560858 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.560872 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.560882 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.663615 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.663681 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.663698 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.663717 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.663729 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.766037 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.766091 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.766102 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.766122 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.766135 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.868359 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.868397 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.868408 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.868425 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.868436 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.970997 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.971039 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.971050 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.971066 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:39 crc kubenswrapper[4903]: I0128 15:46:39.971077 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:39Z","lastTransitionTime":"2026-01-28T15:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.073394 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.073452 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.073465 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.073482 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.073492 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.176044 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.176076 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.176086 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.176099 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.176108 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.278477 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.278519 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.278633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.278700 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.278722 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.381606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.381649 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.381661 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.381677 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.381692 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.413247 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:56:42.097210784 +0000 UTC Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.413365 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.413395 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:40 crc kubenswrapper[4903]: E0128 15:46:40.413465 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:40 crc kubenswrapper[4903]: E0128 15:46:40.413633 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.483512 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.483570 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.483582 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.483599 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.483615 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.586030 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.586070 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.587121 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.587161 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.587195 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.689979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.690018 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.690027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.690042 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.690051 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.791942 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.791972 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.791980 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.791992 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.792000 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.894255 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.894337 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.894350 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.894370 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.894382 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.996573 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.996604 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.996612 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.996625 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:40 crc kubenswrapper[4903]: I0128 15:46:40.996635 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:40Z","lastTransitionTime":"2026-01-28T15:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.098745 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.098779 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.098791 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.098809 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.098820 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.200851 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.200908 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.200925 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.200956 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.200980 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.303267 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.303366 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.303386 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.303408 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.303422 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.406876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.406918 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.406930 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.406946 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.406955 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.413303 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.413323 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.413389 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:18:40.844677114 +0000 UTC Jan 28 15:46:41 crc kubenswrapper[4903]: E0128 15:46:41.413415 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:41 crc kubenswrapper[4903]: E0128 15:46:41.413499 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.509481 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.509544 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.509557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.509576 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.509590 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.611901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.611978 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.612014 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.612049 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.612074 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.715206 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.715252 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.715265 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.715335 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.715349 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.817809 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.817854 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.817871 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.817892 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.817905 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.920621 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.920654 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.920663 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.920678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:41 crc kubenswrapper[4903]: I0128 15:46:41.920688 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:41Z","lastTransitionTime":"2026-01-28T15:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.023129 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.023174 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.023185 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.023201 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.023210 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.125593 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.125628 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.125639 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.125656 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.125668 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.227683 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.227732 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.227745 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.227762 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.227773 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.330436 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.330483 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.330494 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.330564 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.330583 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.412943 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.413001 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:42 crc kubenswrapper[4903]: E0128 15:46:42.413115 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:42 crc kubenswrapper[4903]: E0128 15:46:42.413340 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.413611 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:59:02.563532123 +0000 UTC Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.433261 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.433303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.433314 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.433333 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.433350 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.535495 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.535614 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.535639 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.535679 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.535702 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.638725 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.638778 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.638797 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.638817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.638830 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.741982 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.742049 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.742061 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.742082 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.742095 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.844979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.845015 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.845025 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.845040 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.845051 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.947340 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.947398 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.947413 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.947433 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:42 crc kubenswrapper[4903]: I0128 15:46:42.947449 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:42Z","lastTransitionTime":"2026-01-28T15:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.050606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.050654 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.050664 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.050682 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.050693 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.153142 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.153197 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.153227 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.153269 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.153295 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.255698 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.255749 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.255764 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.255783 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.255797 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.358243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.358336 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.358354 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.358380 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.358397 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.413111 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:43 crc kubenswrapper[4903]: E0128 15:46:43.413232 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.413121 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:43 crc kubenswrapper[4903]: E0128 15:46:43.413444 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.413695 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:49:42.172819655 +0000 UTC Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.461141 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.461198 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.461212 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.461234 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.461248 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.564611 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.564670 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.564687 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.564710 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.564726 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.667415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.667458 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.667470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.667485 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.667496 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.770194 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.770239 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.770251 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.770266 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.770277 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.873000 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.873033 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.873045 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.873061 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.873073 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.975319 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.975415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.975441 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.975466 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:43 crc kubenswrapper[4903]: I0128 15:46:43.975483 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:43Z","lastTransitionTime":"2026-01-28T15:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.078165 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.078197 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.078206 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.078220 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.078229 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.182184 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.182228 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.182237 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.182252 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.182266 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.284606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.284663 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.284681 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.284710 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.284728 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.387960 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.388042 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.388066 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.388095 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.388114 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.412685 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.412714 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:44 crc kubenswrapper[4903]: E0128 15:46:44.412888 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:44 crc kubenswrapper[4903]: E0128 15:46:44.413018 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.413801 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:23:54.026153279 +0000 UTC Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.491158 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.491213 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.491225 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.491247 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.491261 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.594735 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.594802 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.594819 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.594842 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.594857 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.697505 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.697557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.697570 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.697589 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.697605 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.801188 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.801234 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.801249 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.801270 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.801284 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.904022 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.904064 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.904074 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.904093 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:44 crc kubenswrapper[4903]: I0128 15:46:44.904104 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:44Z","lastTransitionTime":"2026-01-28T15:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.006355 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.006398 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.006412 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.006431 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.006447 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.109207 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.109293 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.109309 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.109351 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.109366 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.212708 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.212754 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.212766 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.212784 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.212799 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.314963 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.315017 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.315032 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.315054 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.315070 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.413291 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.413406 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:45 crc kubenswrapper[4903]: E0128 15:46:45.413443 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.413938 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:55:11.920561094 +0000 UTC Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.414759 4903 scope.go:117] "RemoveContainer" containerID="8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566" Jan 28 15:46:45 crc kubenswrapper[4903]: E0128 15:46:45.415602 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.417156 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.417190 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.417199 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.417254 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.417278 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.520369 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.520406 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.520418 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.520435 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.520449 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.622823 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.622858 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.622869 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.622888 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.622900 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.727214 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.727293 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.727303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.727325 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.727339 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.728938 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.728989 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.729003 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.729025 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.729041 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: E0128 15:46:45.746826 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 
2025-08-24T17:21:41Z"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.751566 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.751616 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.751630 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.751649 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.751667 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.770049 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.770082 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.770092 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.770103 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.770111 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.782441 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.782470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.782482 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.782500 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.782510 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.795168 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.795194 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.795203 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.795215 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.795224 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:46:45 crc kubenswrapper[4903]: E0128 15:46:45.805940 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:45 crc kubenswrapper[4903]: E0128 15:46:45.806047 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.829774 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.829816 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.829835 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.829858 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.829870 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.890408 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/2.log" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.892955 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.894436 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.905208 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.917907 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.928976 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.931687 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.931719 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.931727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.931741 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.931751 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:45Z","lastTransitionTime":"2026-01-28T15:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.947927 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.964250 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.977784 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:45 crc kubenswrapper[4903]: I0128 15:46:45.991017 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.002745 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.016323 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.029101 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.034458 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.034495 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.034507 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.034543 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.034557 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.037999 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.055324 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.068947 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.077969 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.087788 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.097087 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.105694 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.121034 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b
1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod 
openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.129978 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.136753 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.136778 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.136788 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.136802 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.136812 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.239491 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.239551 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.239565 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.239582 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.239595 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.341878 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.341921 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.341933 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.341949 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.341961 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.412401 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.412427 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:46 crc kubenswrapper[4903]: E0128 15:46:46.412618 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:46 crc kubenswrapper[4903]: E0128 15:46:46.412715 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.414547 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:54:27.658924998 +0000 UTC Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.443794 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.443844 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.443856 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.443878 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.443891 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.545872 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.545905 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.545913 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.545925 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.545934 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.648596 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.648644 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.648657 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.648678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.648691 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.750976 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.751008 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.751016 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.751029 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.751037 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.854298 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.854372 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.854393 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.854421 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.854439 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.898873 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/3.log" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.899944 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/2.log" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.902983 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" exitCode=1 Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.903026 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.903085 4903 scope.go:117] "RemoveContainer" containerID="8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.903731 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:46:46 crc kubenswrapper[4903]: E0128 15:46:46.903895 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.927298 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.942239 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.957301 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.957354 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.957378 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.957400 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.957420 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:46Z","lastTransitionTime":"2026-01-28T15:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.958770 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.973780 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:46 crc kubenswrapper[4903]: I0128 15:46:46.991349 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.006783 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.025209 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.038452 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.052203 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.059272 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.059312 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.059323 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.059340 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.059352 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.068277 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.084883 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.099098 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.122435 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.136738 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.149076 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.162703 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.162819 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.162915 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.163000 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.163069 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.169076 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.183198 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recove
ry-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.199098 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.227513 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f792
9649039d8dc2e507ca9509aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8833c0feb3b6e89f45d391eeb3493be9c4dfd5422e20dc1ab2e84eae84790566\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:18Z\\\",\\\"message\\\":\\\"IDName:}]\\\\nI0128 15:46:18.416393 6552 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator]} name:Service_openshift-machine-api/machine-api-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:46:18.416430 6552 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416441 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq\\\\nI0128 15:46:18.416449 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-5c5kq in node crc\\\\nI0128 15:46:18.416456 6552 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-5c5kq after 0 failed attempt(s)\\\\nI0128 15:46:18.416463 6552 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-p\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:46Z\\\",\\\"message\\\":\\\":46:46.293433 6951 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:46.293451 6951 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:46:46.293459 6951 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:46.293466 6951 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:46.293473 6951 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:46.293484 6951 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:46.293665 6951 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293706 6951 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293867 6951 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:46:46.294143 6951 reflector.go:311] Stopping 
reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.294418 6951 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerS
tatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.273245 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.273289 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.273299 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.273313 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.273324 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.393663 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.393723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.393739 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.393763 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.393779 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.413282 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.413320 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:47 crc kubenswrapper[4903]: E0128 15:46:47.413415 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:47 crc kubenswrapper[4903]: E0128 15:46:47.413568 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.415337 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:02:27.044275096 +0000 UTC Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.495966 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.496004 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.496013 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.496030 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.496039 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.598197 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.598263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.598283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.598308 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.598326 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.700963 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.701013 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.701027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.701046 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.701059 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.804265 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.804308 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.804322 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.804349 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.804362 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.905920 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.905969 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.905983 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.906002 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.906012 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:47Z","lastTransitionTime":"2026-01-28T15:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.907799 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/3.log" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.911256 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:46:47 crc kubenswrapper[4903]: E0128 15:46:47.911390 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.923202 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.936035 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.956161 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f792
9649039d8dc2e507ca9509aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:46Z\\\",\\\"message\\\":\\\":46:46.293433 6951 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:46.293451 6951 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:46:46.293459 6951 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:46.293466 6951 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:46.293473 6951 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:46.293484 6951 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:46.293665 6951 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293706 6951 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293867 6951 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:46:46.294143 6951 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.294418 6951 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.972141 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:47 crc kubenswrapper[4903]: I0128 15:46:47.988595 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.004630 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.008598 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.009033 
4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.009226 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.009303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.009569 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.026629 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.041344 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.056848 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.070933 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.081814 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.094850 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 
2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.104523 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.111857 4903 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.111915 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.111929 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.111954 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.111966 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.115723 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.125195 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.141242 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e
401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.154227 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.163754 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.172979 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.213878 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.213917 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.213927 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.213944 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.213957 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.317270 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.317347 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.317370 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.317400 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.317426 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.412720 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:48 crc kubenswrapper[4903]: E0128 15:46:48.412908 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.413018 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:48 crc kubenswrapper[4903]: E0128 15:46:48.413144 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.415634 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:28:04.285510639 +0000 UTC Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.420386 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.420460 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.420489 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.420521 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.420587 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.437653 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713
d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.459888 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.477784 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.501370 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.518028 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.522572 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.522617 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.522631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.522653 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.522670 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.532345 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.558294 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.583381 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.600000 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.611668 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.624708 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01
-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.625322 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.625356 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.625367 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.625383 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.625394 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.648957 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca
75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.659205 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.668454 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.676248 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.697018 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.708333 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.725861 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b
1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:46Z\\\",\\\"message\\\":\\\":46:46.293433 6951 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:46.293451 6951 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:46:46.293459 6951 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:46.293466 6951 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:46.293473 6951 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:46.293484 6951 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:46.293665 6951 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293706 6951 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293867 6951 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:46:46.294143 6951 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.294418 6951 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.727775 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.727809 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.727817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.727834 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.727846 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.737027 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.829977 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.830018 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.830026 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.830040 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.830049 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.932428 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.932475 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.932489 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.932510 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:48 crc kubenswrapper[4903]: I0128 15:46:48.932550 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:48Z","lastTransitionTime":"2026-01-28T15:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.036019 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.036067 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.037772 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.037791 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.037804 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.139814 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.139860 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.139873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.139891 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.139904 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.242677 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.242733 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.242746 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.242767 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.242779 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.346842 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.346931 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.346965 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.346998 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.347021 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.412425 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.412499 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:49 crc kubenswrapper[4903]: E0128 15:46:49.412684 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:49 crc kubenswrapper[4903]: E0128 15:46:49.412858 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.415823 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:34:10.663290128 +0000 UTC Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.449698 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.449771 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.449796 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.449826 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.449849 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.552666 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.552714 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.552726 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.552743 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.552759 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.655742 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.656251 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.656270 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.656295 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.656316 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.758643 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.758727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.758752 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.758782 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.758808 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.862348 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.862413 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.862458 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.862484 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.862501 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.969457 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.969553 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.969568 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.969586 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:49 crc kubenswrapper[4903]: I0128 15:46:49.969599 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:49Z","lastTransitionTime":"2026-01-28T15:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.073104 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.073262 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.073294 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.073322 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.073338 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.175902 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.175943 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.175955 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.175973 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.175985 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.279600 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.279669 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.279698 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.279729 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.279751 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.383581 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.383678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.383715 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.383749 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.383772 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.413146 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.413234 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:50 crc kubenswrapper[4903]: E0128 15:46:50.413318 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:50 crc kubenswrapper[4903]: E0128 15:46:50.413411 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.416429 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:30:47.121681482 +0000 UTC Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.487586 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.487652 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.487674 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.487703 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.487727 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.591470 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.591517 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.591566 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.591590 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.591606 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.697421 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.697501 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.697569 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.697599 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.697617 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.801105 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.801157 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.801171 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.801192 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.801204 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.904321 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.904389 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.904408 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.904433 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:50 crc kubenswrapper[4903]: I0128 15:46:50.904450 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:50Z","lastTransitionTime":"2026-01-28T15:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.007102 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.007161 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.007182 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.007244 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.007262 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.109908 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.109970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.109987 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.110010 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.110028 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.212347 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.212415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.212439 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.212474 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.212512 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.315764 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.315836 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.315854 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.315879 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.315897 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.413308 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.413336 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:51 crc kubenswrapper[4903]: E0128 15:46:51.413459 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:51 crc kubenswrapper[4903]: E0128 15:46:51.413675 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.416765 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:13:09.792428284 +0000 UTC Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.423421 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.423520 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.423596 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.423637 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.423674 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.528011 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.528062 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.528071 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.528088 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.528098 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.631454 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.631579 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.631606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.631633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.631652 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.734291 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.734338 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.734353 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.734374 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.734387 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.838037 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.838096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.838115 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.838142 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.838160 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.944619 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.944690 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.944707 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.944736 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:51 crc kubenswrapper[4903]: I0128 15:46:51.944754 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:51Z","lastTransitionTime":"2026-01-28T15:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.048642 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.048689 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.048706 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.048730 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.048748 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.153284 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.153333 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.153345 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.153362 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.153373 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.256236 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.256284 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.256303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.256327 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.256345 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.359905 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.360139 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.360172 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.360202 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.360221 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.413158 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.413265 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.413378 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.413819 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.416974 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:46:19.754340447 +0000 UTC Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.419022 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.419199 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:47:56.419164618 +0000 UTC m=+148.695136169 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.419292 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.419364 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.419577 4903 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.419611 4903 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.419650 4903 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:47:56.4196336 +0000 UTC m=+148.695605121 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.419700 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:47:56.419674531 +0000 UTC m=+148.695646092 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.462901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.462974 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.462999 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.463040 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.463064 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.520914 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.521039 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521220 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521248 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521267 4903 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521335 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:47:56.521311716 +0000 UTC m=+148.797283267 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521606 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521630 4903 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521641 4903 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:52 crc kubenswrapper[4903]: E0128 15:46:52.521688 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:47:56.521676985 +0000 UTC m=+148.797648506 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.566847 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.566894 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.566908 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.566931 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.566947 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.670873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.670948 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.670966 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.670993 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.671010 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.773986 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.774041 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.774059 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.774078 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.774094 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.877848 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.877927 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.877950 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.877975 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.877990 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.980728 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.980773 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.980784 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.980800 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:52 crc kubenswrapper[4903]: I0128 15:46:52.980812 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:52Z","lastTransitionTime":"2026-01-28T15:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.084202 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.084319 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.084386 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.084435 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.084491 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.189126 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.189192 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.189217 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.189249 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.189274 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.291703 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.291763 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.291776 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.291807 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.291861 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.394449 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.394506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.394522 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.394575 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.394589 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.412856 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.412955 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:53 crc kubenswrapper[4903]: E0128 15:46:53.413027 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:53 crc kubenswrapper[4903]: E0128 15:46:53.413100 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.417362 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:20:59.042018195 +0000 UTC Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.497654 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.497717 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.497737 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.497764 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.497783 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.601452 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.601525 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.601575 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.601601 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.601623 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.705563 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.705630 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.705648 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.705677 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.705696 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.807615 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.807658 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.807669 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.807687 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.807698 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.909521 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.909587 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.909600 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.909631 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:53 crc kubenswrapper[4903]: I0128 15:46:53.909678 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:53Z","lastTransitionTime":"2026-01-28T15:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.011914 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.011975 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.011991 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.012016 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.012031 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.114479 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.114518 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.114542 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.114557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.114567 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.217824 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.217860 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.217868 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.217882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.217890 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.320287 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.320326 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.320336 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.320351 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.320361 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.413244 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.413326 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:54 crc kubenswrapper[4903]: E0128 15:46:54.413638 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:54 crc kubenswrapper[4903]: E0128 15:46:54.413794 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.418230 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:33:52.425851543 +0000 UTC Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.422949 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.423246 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.423255 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.423267 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.423276 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.526106 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.526160 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.526174 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.526192 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.526204 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.628370 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.628417 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.628433 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.628456 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.628471 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.731036 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.731118 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.731145 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.731176 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.731201 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.834285 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.834312 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.834321 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.834335 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.834345 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.937052 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.937099 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.937112 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.937131 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:54 crc kubenswrapper[4903]: I0128 15:46:54.937145 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:54Z","lastTransitionTime":"2026-01-28T15:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.040205 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.040260 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.040272 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.040291 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.040303 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.142984 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.143036 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.143048 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.143070 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.143080 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.246955 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.246998 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.247010 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.247031 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.247043 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.349177 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.349240 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.349258 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.349276 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.349287 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.413273 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.413348 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.413447 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.413605 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.418519 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:01:28.02949374 +0000 UTC Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.451405 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.451458 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.451468 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.451483 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.451492 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.553799 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.553861 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.553871 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.553893 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.553906 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.656605 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.656656 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.656667 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.656686 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.656698 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.759644 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.759725 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.759737 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.759762 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.759773 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.861624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.861675 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.861685 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.861703 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.861714 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.862724 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.862805 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.862831 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.862867 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.862892 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.879419 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.884225 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.884283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.884293 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.884309 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.884319 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.897090 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.900972 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.901040 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.901062 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.901092 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.901114 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.915361 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.929642 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.929714 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.929740 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.929775 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.929796 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.953301 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.956979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.957030 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.957041 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.957055 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.957064 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.969837 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:55 crc kubenswrapper[4903]: E0128 15:46:55.970063 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.972630 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.972698 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.972722 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.972753 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:55 crc kubenswrapper[4903]: I0128 15:46:55.972774 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:55Z","lastTransitionTime":"2026-01-28T15:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.074988 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.075020 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.075037 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.075054 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.075066 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.178168 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.178249 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.178269 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.178297 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.178317 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.281801 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.281855 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.281873 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.281901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.281919 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.385210 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.385277 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.385296 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.385321 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.385339 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.412739 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:56 crc kubenswrapper[4903]: E0128 15:46:56.412931 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.413044 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:56 crc kubenswrapper[4903]: E0128 15:46:56.413289 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.418733 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:48:28.787188913 +0000 UTC Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.488733 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.488798 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.488816 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.488841 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.488858 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.591937 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.592020 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.592058 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.592090 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.592115 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.695205 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.695275 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.695294 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.695318 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.695335 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.797856 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.797923 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.797942 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.797970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.797991 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.901127 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.901159 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.901184 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.901201 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:56 crc kubenswrapper[4903]: I0128 15:46:56.901212 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:56Z","lastTransitionTime":"2026-01-28T15:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.004044 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.004130 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.004163 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.004211 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.004234 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.107924 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.107988 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.108004 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.108027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.108043 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.211753 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.211850 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.211865 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.211890 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.211904 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.315458 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.315509 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.315520 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.315555 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.315569 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.413215 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.413266 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:57 crc kubenswrapper[4903]: E0128 15:46:57.413706 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:57 crc kubenswrapper[4903]: E0128 15:46:57.414007 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.418585 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.418639 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.418658 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.418679 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.418696 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.425459 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:06:47.654826665 +0000 UTC Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.521370 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.521453 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.521481 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.521513 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.521584 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.625687 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.625761 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.625803 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.625841 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.625864 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.730520 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.730601 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.730643 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.730664 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.730673 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.833445 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.833565 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.833574 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.833592 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.833609 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.936396 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.936454 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.936465 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.936478 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:57 crc kubenswrapper[4903]: I0128 15:46:57.936487 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:57Z","lastTransitionTime":"2026-01-28T15:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.038876 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.038916 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.038943 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.038959 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.038968 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.144797 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.144852 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.144870 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.144890 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.144901 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.246936 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.246968 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.246976 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.246989 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.246997 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.349255 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.349294 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.349303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.349317 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.349326 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.413320 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.413421 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:46:58 crc kubenswrapper[4903]: E0128 15:46:58.413464 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:46:58 crc kubenswrapper[4903]: E0128 15:46:58.414836 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.425822 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:56:16.064206334 +0000 UTC Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.445857 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.452394 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.452759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.453123 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.453509 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.453746 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.462040 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.477468 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.490276 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.505473 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.519925 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 
15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.537045 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b
1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:46Z\\\",\\\"message\\\":\\\":46:46.293433 6951 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:46.293451 6951 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:46:46.293459 6951 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:46.293466 6951 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:46.293473 6951 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:46.293484 6951 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:46.293665 6951 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293706 6951 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293867 6951 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:46:46.294143 6951 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.294418 6951 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.553011 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.556478 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.556505 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.556515 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.556543 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.556555 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.564454 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.573503 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.586347 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.597637 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.609626 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.621562 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.638820 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.654242 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.658696 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.658733 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.658746 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.658764 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.658776 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.663746 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.674754 4903 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.684264 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:46:58Z is after 2025-08-24T17:21:41Z" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.761128 4903 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.761210 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.761233 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.761266 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.761288 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.863506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.863549 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.863575 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.863591 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.863601 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.966579 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.966646 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.966672 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.966701 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:58 crc kubenswrapper[4903]: I0128 15:46:58.966723 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:58Z","lastTransitionTime":"2026-01-28T15:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.069671 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.069728 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.069745 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.069769 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.069786 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.172779 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.172820 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.172831 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.172861 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.172871 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.276344 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.276411 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.276423 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.276441 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.276453 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.378687 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.378712 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.378720 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.378735 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.378743 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.413416 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.413416 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:46:59 crc kubenswrapper[4903]: E0128 15:46:59.413623 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:46:59 crc kubenswrapper[4903]: E0128 15:46:59.413688 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.426077 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:14:35.212460731 +0000 UTC Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.481936 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.482002 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.482019 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.482047 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.482066 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.585608 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.585742 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.585768 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.585797 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.585821 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.688837 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.688908 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.688935 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.688964 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.688988 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.792891 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.792956 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.792979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.793008 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.793030 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.895763 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.895800 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.895816 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.895836 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.895848 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.999191 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.999277 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.999310 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.999340 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:46:59 crc kubenswrapper[4903]: I0128 15:46:59.999362 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:46:59Z","lastTransitionTime":"2026-01-28T15:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.102918 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.103002 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.103024 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.103057 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.103082 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.206304 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.206437 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.206458 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.206482 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.206500 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.309431 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.309506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.309531 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.309602 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.309624 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.412444 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.412529 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.412593 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.412610 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.412609 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:00 crc kubenswrapper[4903]: E0128 15:47:00.412621 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.412630 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: E0128 15:47:00.412734 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.412793 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.413493 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:47:00 crc kubenswrapper[4903]: E0128 15:47:00.413696 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.426886 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:55:51.844352126 +0000 UTC Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.515493 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.515542 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.515557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.515594 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.515608 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.618588 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.618630 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.618640 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.618661 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.618673 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.721007 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.721071 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.721088 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.721142 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.721159 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.823864 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.824106 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.824225 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.824327 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.824415 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.926818 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.927332 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.927505 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.927723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:00 crc kubenswrapper[4903]: I0128 15:47:00.927885 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:00Z","lastTransitionTime":"2026-01-28T15:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.030676 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.030736 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.030754 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.030783 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.030802 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.133484 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.133557 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.133570 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.133586 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.133598 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.237064 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.237121 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.237134 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.237182 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.237193 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.339468 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.339518 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.339560 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.339584 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.339600 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.413324 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.413355 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:01 crc kubenswrapper[4903]: E0128 15:47:01.413509 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:01 crc kubenswrapper[4903]: E0128 15:47:01.413608 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.427795 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:57:10.283966587 +0000 UTC Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.442782 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.442833 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.442844 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.442861 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.442877 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.545744 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.545820 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.545844 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.545885 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.546003 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.648363 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.648449 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.648482 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.648511 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.648573 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.752226 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.752298 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.752335 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.752366 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.752396 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.855961 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.856024 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.856049 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.856076 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.856095 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.958582 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.958658 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.958677 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.958702 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:01 crc kubenswrapper[4903]: I0128 15:47:01.958738 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:01Z","lastTransitionTime":"2026-01-28T15:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.062095 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.062140 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.062155 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.062180 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.062198 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.165315 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.165350 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.165358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.165376 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.165386 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.269135 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.269168 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.269177 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.269194 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.269204 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.372302 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.372368 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.372386 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.372418 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.372473 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.413493 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:02 crc kubenswrapper[4903]: E0128 15:47:02.413742 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.413803 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:02 crc kubenswrapper[4903]: E0128 15:47:02.414001 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.427957 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:03:31.287741926 +0000 UTC Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.476401 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.476495 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.476522 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.476603 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.476629 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.580643 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.580690 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.580699 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.580713 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.580727 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.683368 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.683404 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.683415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.683431 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.683440 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.786849 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.786930 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.786946 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.786962 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.786974 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.889895 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.889960 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.889969 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.889984 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.889993 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.992682 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.992714 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.992723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.992736 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:02 crc kubenswrapper[4903]: I0128 15:47:02.992748 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:02Z","lastTransitionTime":"2026-01-28T15:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.095655 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.095689 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.095697 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.095716 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.095727 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.198415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.198483 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.198500 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.198554 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.198570 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.300983 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.301032 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.301043 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.301061 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.301079 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.403329 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.403369 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.403378 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.403393 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.403402 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.413000 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.413036 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:03 crc kubenswrapper[4903]: E0128 15:47:03.413174 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:03 crc kubenswrapper[4903]: E0128 15:47:03.413354 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.428378 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:41:06.118359377 +0000 UTC Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.506575 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.506636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.506648 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.506664 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.506673 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.609779 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.609872 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.609885 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.609907 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.609947 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.713678 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.713794 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.713831 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.713861 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.713878 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.816260 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.816314 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.816330 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.816352 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.816362 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.919606 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.919660 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.919680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.919701 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:03 crc kubenswrapper[4903]: I0128 15:47:03.919716 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:03Z","lastTransitionTime":"2026-01-28T15:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.022099 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.022132 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.022142 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.022172 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.022182 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.124837 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.124882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.124891 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.124910 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.124920 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.227642 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.227691 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.227705 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.227722 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.227733 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.329986 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.330027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.330057 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.330076 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.330091 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.413426 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:04 crc kubenswrapper[4903]: E0128 15:47:04.413695 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.413777 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:04 crc kubenswrapper[4903]: E0128 15:47:04.413938 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.428900 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:50:12.69438593 +0000 UTC Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.433196 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.433243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.433255 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.433275 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.433292 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.536059 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.536087 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.536096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.536109 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.536119 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.639202 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.639243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.639254 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.639273 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.639284 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.741915 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.741988 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.742005 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.742038 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.742054 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.844593 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.844630 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.844640 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.844657 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.844669 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.947430 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.947491 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.947504 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.947571 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:04 crc kubenswrapper[4903]: I0128 15:47:04.947595 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:04Z","lastTransitionTime":"2026-01-28T15:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.050399 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.050451 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.050472 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.050497 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.050515 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.153668 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.153727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.153739 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.153759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.153771 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.256352 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.256411 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.256425 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.256444 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.256457 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.359039 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.359112 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.359137 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.359166 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.359187 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.413202 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.413202 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:05 crc kubenswrapper[4903]: E0128 15:47:05.413486 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:05 crc kubenswrapper[4903]: E0128 15:47:05.413353 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.429306 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:16:46.778899838 +0000 UTC Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.461624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.461719 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.461739 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.461766 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.461784 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.563906 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.563945 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.563954 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.563967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.563978 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.666877 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.666957 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.666981 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.667013 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.667037 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.769648 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.769694 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.769706 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.769745 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.769758 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.872283 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.872341 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.872358 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.872385 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.872403 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.975899 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.975946 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.975958 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.975977 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.975989 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.988338 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.988369 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.988377 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.988392 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:05 crc kubenswrapper[4903]: I0128 15:47:05.988399 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:05Z","lastTransitionTime":"2026-01-28T15:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.007380 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.011682 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.011748 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.011765 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.011787 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.011802 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.029198 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.032892 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.032932 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.032945 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.032961 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.032976 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.048474 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.052777 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.052845 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.052857 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.052874 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.052886 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.069246 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.073441 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.073516 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.073576 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.073610 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.073631 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.089313 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9977edb2-96fc-47bd-97a1-108db3bc28fb\\\",\\\"systemUUID\\\":\\\"42f25525-e039-4b4b-9161-1620e166e9cf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.089477 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.091971 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.092011 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.092021 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.092038 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.092049 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.194996 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.195046 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.195063 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.195085 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.195101 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.297432 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.297491 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.297511 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.297540 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.297585 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.400605 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.400647 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.400657 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.400759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.400780 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.413606 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.413746 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.413834 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:06 crc kubenswrapper[4903]: E0128 15:47:06.413995 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.429946 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:13:34.059673058 +0000 UTC Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.503265 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.503321 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.503333 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.503347 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.503356 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.605382 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.605434 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.605443 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.605459 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.605470 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.707521 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.707571 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.707579 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.707591 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.707600 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.811437 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.811506 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.811523 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.811591 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.811608 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.914852 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.914919 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.914943 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.914978 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:06 crc kubenswrapper[4903]: I0128 15:47:06.915002 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:06Z","lastTransitionTime":"2026-01-28T15:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.017704 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.017756 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.017772 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.017791 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.017804 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.121161 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.121287 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.121309 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.121335 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.121352 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.224485 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.224597 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.224623 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.224653 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.224675 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.327901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.327952 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.327967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.327985 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.327997 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.413148 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.413167 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:07 crc kubenswrapper[4903]: E0128 15:47:07.413565 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:07 crc kubenswrapper[4903]: E0128 15:47:07.413641 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.430268 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.430298 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.430306 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.430318 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.430327 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.469110 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:20:32.431716799 +0000 UTC Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.484686 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:07 crc kubenswrapper[4903]: E0128 15:47:07.484833 4903 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:47:07 crc kubenswrapper[4903]: E0128 15:47:07.484909 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs podName:90b23d2e-fec0-494c-9a60-461cc16fe0ae nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.484889475 +0000 UTC m=+163.760861036 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs") pod "network-metrics-daemon-kq2bn" (UID: "90b23d2e-fec0-494c-9a60-461cc16fe0ae") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.532474 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.532543 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.532555 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.532571 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.532583 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.635022 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.635183 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.635201 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.635220 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.635232 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.737646 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.737817 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.737838 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.737909 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.737931 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.840280 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.840339 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.840362 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.840389 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.840412 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.943498 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.943538 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.943574 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.943592 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:07 crc kubenswrapper[4903]: I0128 15:47:07.943603 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:07Z","lastTransitionTime":"2026-01-28T15:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.046264 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.046312 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.046345 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.046361 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.046373 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.149399 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.149464 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.149481 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.149511 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.149563 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.251321 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.251363 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.251372 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.251387 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.251399 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.353841 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.353901 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.353915 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.353933 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.353951 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.412771 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.412892 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:08 crc kubenswrapper[4903]: E0128 15:47:08.413474 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:08 crc kubenswrapper[4903]: E0128 15:47:08.413591 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.437093 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26f4ad13-c9dd-4182-b77f-11f37a4e29d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a80a1d2804532dbea2cc48520a85c48480105804236866b3928ea35b9b4bc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65388d4fe58184f8cee108febe5698b8ae50861f36f60d23d3bf82f0d30bccd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa06e0143f01ad891103ce8d397b85ff67dc3657abd5a2027e8b46b5d6cd6b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93f999a500a7cb61fc04d15529eec168d83fca75e80add356fc47ece776c1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f0913ab71e598cfe07ba1a4e69eaf7159be9a5b3522fa4e75b047425f9e0df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34b918859d617391bf1a2242023a4eee09d221c0bc73359ba61323e9891e401e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc60c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3b4ccc653f26fc6
0c382ad8641343c5c2cbeb8743782ce672511cb110039e58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e994f73fc61baac543a40d98a486ba3c480bb75804ebafe61299f78abe612e55\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.449351 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7g6pn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"368501de-b207-4b6b-a0fb-eba74fe5ec74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:37Z\\\",\\\"message\\\":\\\"2026-01-28T15:45:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5\\\\n2026-01-28T15:45:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1c40311c-e431-48cb-9b8d-67df302319c5 to /host/opt/cni/bin/\\\\n2026-01-28T15:45:52Z [verbose] multus-daemon started\\\\n2026-01-28T15:45:52Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:46:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcrs2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7g6pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.456632 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.456665 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.456700 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.456715 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.456724 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.461240 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xzz6z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e8165e7-4fdc-495d-9408-87fca9df790e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c6fabb36bc38aca528f26811f56ecd008a6c402fcb8c4b77e5e6a7db0aeb979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfzj2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xzz6z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.470130 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:12:24.743090808 +0000 UTC Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.471831 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90b23d2e-fec0-494c-9a60-461cc16fe0ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4cqkt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kq2bn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.481659 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca514244-2297-4a3d-add3-0d0ff467d671\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecf4d136ff273e946812144980a77060574b2bb2bd202638e909c4b9c376b627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d03f4839a5165b04340f2747a46dbfb52664cbfffa9a2d7d40e9f3222ad288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b77cf812928b2ae9f283ab8f907149b76e37ed1c233c74be0b277c5337a423ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa353f8fd563d9a4ec32623c402e1501a831d5c96c91ada32b42918603a3f159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.492623 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f54b7318595529c6f917d6589c17681b457d26932e550f68dfa9d83a8233a87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 
15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.508376 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29cc3edd-9664-4899-b496-47543927e256\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://540a7be38476ad752d63ea365d5f2b
1652eb4d3943c9c5ada872826028291a1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:46:46Z\\\",\\\"message\\\":\\\":46:46.293433 6951 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:46:46.293451 6951 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:46:46.293459 6951 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:46:46.293466 6951 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:46:46.293473 6951 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:46:46.293484 6951 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:46:46.293665 6951 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293706 6951 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.293867 6951 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:46:46.294143 6951 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:46:46.294418 6951 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:46:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwk55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dwbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.526156 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd494893-bf26-4c20-a223-cea43bdcb107\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dde86f1db352c23e28075a450bbb8b15a4956c2be614ba0703760b223c4ba26a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b335442253fc0cd8e1e41c7f8aa353d4ac37ff8fd58153788136bf7ab6a25309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4br8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:46:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4w7fz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.543712 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0566b7c5-190a-4000-9e3c-ff9d91235ccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a2f9a74ec4644541d919075c881851ef23df7210b87b8f64e62446a31dbab23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90f8a85bcadb71566fd34c62cc3a81a1566fb3fa4361a0200050c9c76ec5122f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c97927f3346e1b9361018626a2503cf6af5774d90d2fb3abc360c2f89d92356\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a73b2a4c06974edd7a018fad84015e100bc30cd42cae18a44c23e5ba965d0d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88adc0fbeb86d426b92069b6a6f5f8ed735d17abe8b218429f9a8e04024038e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c2418d8ca13d9e98adf3454e78395f86c1fa25d9d747876cdf8012e38315f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8bd0ddc4a29039336158529248fd9ad8725ee6e40fe68972a00f46851a6d026\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6f4ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5c5kq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.556197 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e09988e-81a2-4ed5-98a7-97fa0d9dba72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9f4b75aafd10d8dc60610a02fd44258d0c9b9e95ca425785ae7dfcb4767f54b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2c62b8d0db6803c720345b455b564c9ec30bfee8198902c2175f9ed5ad38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915ebcbdd0536500b64db1569dc6e600ceca64e6a031b21ea28c9441043f62c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.559875 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.559976 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.559991 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.560010 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.560024 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.567685 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26c57e9a-4fd7-46bd-b562-afc490cb6bf2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:46:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:45:47Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:45:32.558365 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:45:32.559761 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2529886345/tls.crt::/tmp/serving-cert-2529886345/tls.key\\\\\\\"\\\\nI0128 15:45:47.760374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:45:47.774722 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:45:47.774747 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:45:47.774770 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:45:47.774775 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:45:47.779703 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:45:47.779730 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779736 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:45:47.779742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:45:47.779747 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:45:47.779750 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:45:47.779755 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:45:47.779958 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:45:47.783853 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.578865 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20ae4881881eb7bf1d5a0d6191d833cb9988bc38641ee0318727c217199906da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.592857 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.607190 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.619739 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.629347 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vxz6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"466b540b-3447-4d30-a2e5-8c7755027e99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07c358eb05277d379266a04aadca5b4abb04da5f978214e67eca5843936d885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57jlv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vxz6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.639379 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fda8c15d-7f11-4d64-8b66-9ad429953fa3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2a31672781937d1c8fd7aa71265384b192884f7b0011f9d98a11af732496258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4487fd365c9b3a61a13ecf4b30724ea249a1bdc10b967b4343e68d252bd8e1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:45:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:45:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.653324 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a5cebe0fee3ebf877678de05b24fc35757d1b0bd4362942e4011cea1886233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e15d9791d38f63c44372836fa44ebebd5765530c968f73cfd928675f5888521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.662727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.662786 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.662806 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.662827 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.662845 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.666297 4903 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dacf7a8c-d645-4596-9266-092101fc3613\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7d629ffafaf2b171797614b9299fc9871be97f95282f3e05254910944804b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88bsb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:45:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-plxzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:47:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.765404 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.765495 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.765512 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.765566 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.765587 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.868192 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.868234 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.868249 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.868269 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.868281 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.971603 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.971651 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.971669 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.971693 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:08 crc kubenswrapper[4903]: I0128 15:47:08.971713 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:08Z","lastTransitionTime":"2026-01-28T15:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.074356 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.074745 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.074881 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.075003 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.075153 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.178201 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.178290 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.178309 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.178340 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.178356 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.281574 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.281624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.281636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.281654 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.281666 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.384311 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.384346 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.384355 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.384370 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.384382 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.413215 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.413287 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:09 crc kubenswrapper[4903]: E0128 15:47:09.413345 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:09 crc kubenswrapper[4903]: E0128 15:47:09.413439 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.471046 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:10:40.444567939 +0000 UTC Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.487923 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.487969 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.488024 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.488047 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.488100 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.591346 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.591413 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.591427 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.591454 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.591472 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.694854 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.694917 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.694936 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.694959 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.694976 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.798031 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.798085 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.798098 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.798118 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.798133 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.904409 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.904498 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.904518 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.904633 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:09 crc kubenswrapper[4903]: I0128 15:47:09.904649 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:09Z","lastTransitionTime":"2026-01-28T15:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.006975 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.007017 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.007027 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.007040 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.007049 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.109475 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.109589 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.109624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.109653 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.109671 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.213025 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.213063 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.213073 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.213088 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.213099 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.314928 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.314962 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.314973 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.314991 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.315003 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.413258 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.413309 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:10 crc kubenswrapper[4903]: E0128 15:47:10.413386 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:10 crc kubenswrapper[4903]: E0128 15:47:10.413462 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.416553 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.416583 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.416595 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.416611 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.416621 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.471642 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:35:22.896354245 +0000 UTC Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.519330 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.519381 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.519393 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.519414 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.519427 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.622003 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.622054 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.622067 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.622085 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.622096 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.724303 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.724338 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.724349 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.724366 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.724378 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.827243 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.827299 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.827313 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.827329 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.827341 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.930163 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.930219 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.930231 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.930249 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:10 crc kubenswrapper[4903]: I0128 15:47:10.930261 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:10Z","lastTransitionTime":"2026-01-28T15:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.033325 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.033383 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.033394 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.033411 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.033423 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.135561 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.135614 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.135624 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.135643 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.135656 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.238671 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.238755 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.238775 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.238803 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.238821 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.343008 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.343090 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.343108 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.343140 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.343163 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.413627 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:11 crc kubenswrapper[4903]: E0128 15:47:11.413816 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.413638 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:11 crc kubenswrapper[4903]: E0128 15:47:11.414624 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.414967 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:47:11 crc kubenswrapper[4903]: E0128 15:47:11.415152 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.446105 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.446158 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.446166 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.446181 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.446190 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.471850 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:39:01.179697611 +0000 UTC Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.548789 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.548846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.548856 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.548882 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.548894 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.651687 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.651759 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.651777 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.651801 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.651818 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.754057 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.754095 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.754106 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.754157 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.754175 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.857257 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.857754 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.857777 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.857797 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.857810 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.961655 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.961731 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.961749 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.961784 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:11 crc kubenswrapper[4903]: I0128 15:47:11.961822 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:11Z","lastTransitionTime":"2026-01-28T15:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.064239 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.064314 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.064331 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.064350 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.064364 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.167627 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.167690 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.167703 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.167725 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.167738 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.269945 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.269990 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.270001 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.270017 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.270028 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.372769 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.372821 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.372832 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.372848 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.372859 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.412382 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:12 crc kubenswrapper[4903]: E0128 15:47:12.412609 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.412779 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:12 crc kubenswrapper[4903]: E0128 15:47:12.413158 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.472043 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:29:00.746297815 +0000 UTC Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.475145 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.475191 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.475204 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.475224 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.475236 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.578107 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.578148 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.578156 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.578170 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.578179 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.680881 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.680930 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.680948 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.680967 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.680981 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.783052 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.783098 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.783108 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.783124 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.783137 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.889560 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.889607 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.889619 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.889635 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:12 crc kubenswrapper[4903]: I0128 15:47:12.889650 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:12Z","lastTransitionTime":"2026-01-28T15:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.034363 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.034415 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.034434 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.034453 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.034465 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.137824 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.137903 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.137930 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.137963 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.137987 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.240486 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.240578 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.240600 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.240627 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.240645 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.344070 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.344131 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.344148 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.344171 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.344187 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.412579 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.412643 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:13 crc kubenswrapper[4903]: E0128 15:47:13.412730 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:13 crc kubenswrapper[4903]: E0128 15:47:13.413111 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.446586 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.446636 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.446648 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.446667 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.446678 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.472439 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:36:36.732944482 +0000 UTC Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.551080 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.551130 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.551147 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.551173 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.551190 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.654892 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.654990 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.655004 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.655036 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.655051 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.759182 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.759226 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.759236 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.759256 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.759292 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.862317 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.862396 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.862424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.862454 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.862477 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.966086 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.966144 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.966161 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.966187 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:13 crc kubenswrapper[4903]: I0128 15:47:13.966203 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:13Z","lastTransitionTime":"2026-01-28T15:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.069642 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.069688 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.069724 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.069742 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.069755 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.172720 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.172790 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.172804 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.172830 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.172852 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.276380 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.276449 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.276463 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.276487 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.276502 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.380352 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.380405 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.380420 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.380441 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.380455 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.413131 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.413149 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:14 crc kubenswrapper[4903]: E0128 15:47:14.413423 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:14 crc kubenswrapper[4903]: E0128 15:47:14.413608 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.473077 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:00:11.619041961 +0000 UTC Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.484077 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.484126 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.484140 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.484161 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.484177 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.588125 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.588227 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.588250 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.588278 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.588297 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.691820 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.691905 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.691941 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.691972 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.691993 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.795216 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.795263 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.795274 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.795297 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.795309 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.897846 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.897884 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.897892 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.897908 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:14 crc kubenswrapper[4903]: I0128 15:47:14.897917 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:14Z","lastTransitionTime":"2026-01-28T15:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.000910 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.000954 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.000969 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.000986 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.000998 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.104647 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.104709 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.104727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.104757 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.105046 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.208128 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.208184 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.208195 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.208213 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.208227 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.311371 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.311437 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.311460 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.311492 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.311519 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.412407 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:15 crc kubenswrapper[4903]: E0128 15:47:15.412544 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.412727 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:15 crc kubenswrapper[4903]: E0128 15:47:15.412780 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.413300 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.413334 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.413345 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.413361 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.413372 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.473848 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:15:47.749415364 +0000 UTC Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.515913 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.515970 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.515979 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.515995 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.516005 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.618382 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.618427 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.618443 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.618460 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.618471 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.721495 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.721563 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.721576 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.721592 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.721602 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.824611 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.824680 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.824697 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.824723 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.824748 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.927424 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.927491 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.927509 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.927566 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:15 crc kubenswrapper[4903]: I0128 15:47:15.927583 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:15Z","lastTransitionTime":"2026-01-28T15:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.029673 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.029719 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.029727 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.029740 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.029748 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:16Z","lastTransitionTime":"2026-01-28T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.132011 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.132065 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.132078 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.132096 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.132108 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:16Z","lastTransitionTime":"2026-01-28T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.135177 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.135229 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.135240 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.135256 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.135267 4903 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:47:16Z","lastTransitionTime":"2026-01-28T15:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.188192 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl"] Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.188627 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.190470 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.190813 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.190988 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.191901 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.232814 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.232793956 podStartE2EDuration="39.232793956s" podCreationTimestamp="2026-01-28 15:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.220011018 +0000 UTC m=+108.495982529" watchObservedRunningTime="2026-01-28 15:47:16.232793956 +0000 UTC m=+108.508765467" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.257340 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podStartSLOduration=87.257316747 podStartE2EDuration="1m27.257316747s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.245790361 +0000 UTC m=+108.521761872" watchObservedRunningTime="2026-01-28 15:47:16.257316747 +0000 UTC 
m=+108.533288248" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.282779 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=84.282758501 podStartE2EDuration="1m24.282758501s" podCreationTimestamp="2026-01-28 15:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.282500415 +0000 UTC m=+108.558471936" watchObservedRunningTime="2026-01-28 15:47:16.282758501 +0000 UTC m=+108.558730022" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.286262 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.286341 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.286368 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.286393 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.286661 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.299001 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7g6pn" podStartSLOduration=87.298985279 podStartE2EDuration="1m27.298985279s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.298894477 +0000 UTC m=+108.574865998" watchObservedRunningTime="2026-01-28 15:47:16.298985279 +0000 UTC m=+108.574956790" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.319570 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/node-ca-xzz6z" podStartSLOduration=87.319544358 podStartE2EDuration="1m27.319544358s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.309290614 +0000 UTC m=+108.585262135" watchObservedRunningTime="2026-01-28 15:47:16.319544358 +0000 UTC m=+108.595515869" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.319762 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4w7fz" podStartSLOduration=87.319757484 podStartE2EDuration="1m27.319757484s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.319017954 +0000 UTC m=+108.594989475" watchObservedRunningTime="2026-01-28 15:47:16.319757484 +0000 UTC m=+108.595728995" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.347328 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.347304032 podStartE2EDuration="56.347304032s" podCreationTimestamp="2026-01-28 15:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.335007436 +0000 UTC m=+108.610978947" watchObservedRunningTime="2026-01-28 15:47:16.347304032 +0000 UTC m=+108.623275553" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.387985 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.388045 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.388089 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.388113 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.388133 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.388982 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.389055 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.389107 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.402093 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.409459 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba9f9693-e7d0-430c-82b4-d9c2df2de4ea-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pgcbl\" (UID: \"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.412459 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.412492 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:16 crc kubenswrapper[4903]: E0128 15:47:16.412633 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:16 crc kubenswrapper[4903]: E0128 15:47:16.412722 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.429159 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vxz6b" podStartSLOduration=87.429136178 podStartE2EDuration="1m27.429136178s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.429134928 +0000 UTC m=+108.705106439" watchObservedRunningTime="2026-01-28 15:47:16.429136178 +0000 UTC m=+108.705107689" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.443784 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5c5kq" podStartSLOduration=87.443768124 podStartE2EDuration="1m27.443768124s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.442733378 +0000 UTC m=+108.718704899" watchObservedRunningTime="2026-01-28 15:47:16.443768124 +0000 UTC m=+108.719739635" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.474620 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.474600407 podStartE2EDuration="1m26.474600407s" podCreationTimestamp="2026-01-28 15:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.461430339 +0000 UTC m=+108.737401850" watchObservedRunningTime="2026-01-28 15:47:16.474600407 +0000 UTC m=+108.750571918" Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.474752 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:44:59.75260069 +0000 UTC Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.474803 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.482035 4903 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:47:16 crc kubenswrapper[4903]: I0128 15:47:16.488630 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.488610998 podStartE2EDuration="1m28.488610998s" podCreationTimestamp="2026-01-28 15:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:16.475201223 +0000 UTC m=+108.751172744" watchObservedRunningTime="2026-01-28 15:47:16.488610998 +0000 UTC m=+108.764582509" Jan 28 15:47:16 crc 
kubenswrapper[4903]: I0128 15:47:16.501372 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" Jan 28 15:47:17 crc kubenswrapper[4903]: I0128 15:47:17.051780 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" event={"ID":"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea","Type":"ContainerStarted","Data":"b61d42edfa545c9b7446cf10b27b506cb51e0c0374bc0ce5d7fbded156d259bc"} Jan 28 15:47:17 crc kubenswrapper[4903]: I0128 15:47:17.052130 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" event={"ID":"ba9f9693-e7d0-430c-82b4-d9c2df2de4ea","Type":"ContainerStarted","Data":"36270236bf3e4c3fa91b476285fbe8d6b6946f91dabe790a7061d9925a3fc6b8"} Jan 28 15:47:17 crc kubenswrapper[4903]: I0128 15:47:17.067046 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pgcbl" podStartSLOduration=88.067021219 podStartE2EDuration="1m28.067021219s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:17.066318262 +0000 UTC m=+109.342289813" watchObservedRunningTime="2026-01-28 15:47:17.067021219 +0000 UTC m=+109.342992770" Jan 28 15:47:17 crc kubenswrapper[4903]: I0128 15:47:17.413420 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:17 crc kubenswrapper[4903]: I0128 15:47:17.413433 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:17 crc kubenswrapper[4903]: E0128 15:47:17.413630 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:17 crc kubenswrapper[4903]: E0128 15:47:17.413744 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:18 crc kubenswrapper[4903]: I0128 15:47:18.413352 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:18 crc kubenswrapper[4903]: E0128 15:47:18.414809 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:18 crc kubenswrapper[4903]: I0128 15:47:18.414858 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:18 crc kubenswrapper[4903]: E0128 15:47:18.415347 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:19 crc kubenswrapper[4903]: I0128 15:47:19.412586 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:19 crc kubenswrapper[4903]: I0128 15:47:19.412676 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:19 crc kubenswrapper[4903]: E0128 15:47:19.412734 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:19 crc kubenswrapper[4903]: E0128 15:47:19.412853 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:20 crc kubenswrapper[4903]: I0128 15:47:20.413291 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:20 crc kubenswrapper[4903]: E0128 15:47:20.413477 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:20 crc kubenswrapper[4903]: I0128 15:47:20.413583 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:20 crc kubenswrapper[4903]: E0128 15:47:20.413796 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:21 crc kubenswrapper[4903]: I0128 15:47:21.412677 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:21 crc kubenswrapper[4903]: I0128 15:47:21.412752 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:21 crc kubenswrapper[4903]: E0128 15:47:21.412848 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:21 crc kubenswrapper[4903]: E0128 15:47:21.412954 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:22 crc kubenswrapper[4903]: I0128 15:47:22.413179 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:22 crc kubenswrapper[4903]: I0128 15:47:22.413238 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:22 crc kubenswrapper[4903]: E0128 15:47:22.413321 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:22 crc kubenswrapper[4903]: E0128 15:47:22.413460 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:23 crc kubenswrapper[4903]: I0128 15:47:23.412747 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:23 crc kubenswrapper[4903]: I0128 15:47:23.412816 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:23 crc kubenswrapper[4903]: E0128 15:47:23.412905 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:23 crc kubenswrapper[4903]: E0128 15:47:23.412977 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.077092 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/1.log" Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.077980 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/0.log" Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.078040 4903 generic.go:334] "Generic (PLEG): container finished" podID="368501de-b207-4b6b-a0fb-eba74fe5ec74" containerID="47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31" exitCode=1 Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.078084 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerDied","Data":"47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31"} Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.078133 4903 scope.go:117] "RemoveContainer" containerID="ad3a1e68252ad15e80141b07798c3f1d623a8076c4cc6c32c9953ea32b5e976f" Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.078976 4903 scope.go:117] "RemoveContainer" containerID="47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31" Jan 28 15:47:24 crc kubenswrapper[4903]: E0128 15:47:24.079311 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7g6pn_openshift-multus(368501de-b207-4b6b-a0fb-eba74fe5ec74)\"" pod="openshift-multus/multus-7g6pn" podUID="368501de-b207-4b6b-a0fb-eba74fe5ec74" Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.413205 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:24 crc kubenswrapper[4903]: E0128 15:47:24.413370 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:24 crc kubenswrapper[4903]: I0128 15:47:24.413464 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:24 crc kubenswrapper[4903]: E0128 15:47:24.413716 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:25 crc kubenswrapper[4903]: I0128 15:47:25.083226 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/1.log" Jan 28 15:47:25 crc kubenswrapper[4903]: I0128 15:47:25.413196 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:25 crc kubenswrapper[4903]: E0128 15:47:25.413302 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:25 crc kubenswrapper[4903]: I0128 15:47:25.413343 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:25 crc kubenswrapper[4903]: E0128 15:47:25.413680 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:25 crc kubenswrapper[4903]: I0128 15:47:25.414601 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:47:25 crc kubenswrapper[4903]: E0128 15:47:25.414866 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dwbc4_openshift-ovn-kubernetes(29cc3edd-9664-4899-b496-47543927e256)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" Jan 28 15:47:26 crc kubenswrapper[4903]: I0128 15:47:26.412648 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:26 crc kubenswrapper[4903]: I0128 15:47:26.412766 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:26 crc kubenswrapper[4903]: E0128 15:47:26.412806 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:26 crc kubenswrapper[4903]: E0128 15:47:26.412921 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:27 crc kubenswrapper[4903]: I0128 15:47:27.413221 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:27 crc kubenswrapper[4903]: E0128 15:47:27.413352 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:27 crc kubenswrapper[4903]: I0128 15:47:27.413244 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:27 crc kubenswrapper[4903]: E0128 15:47:27.413578 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:28 crc kubenswrapper[4903]: E0128 15:47:28.404818 4903 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 15:47:28 crc kubenswrapper[4903]: I0128 15:47:28.412445 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:28 crc kubenswrapper[4903]: I0128 15:47:28.413776 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:28 crc kubenswrapper[4903]: E0128 15:47:28.414014 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:28 crc kubenswrapper[4903]: E0128 15:47:28.414260 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:28 crc kubenswrapper[4903]: E0128 15:47:28.514300 4903 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:47:29 crc kubenswrapper[4903]: I0128 15:47:29.412941 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:29 crc kubenswrapper[4903]: E0128 15:47:29.413188 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:29 crc kubenswrapper[4903]: I0128 15:47:29.413313 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:29 crc kubenswrapper[4903]: E0128 15:47:29.413430 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:30 crc kubenswrapper[4903]: I0128 15:47:30.413237 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:30 crc kubenswrapper[4903]: E0128 15:47:30.413372 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:30 crc kubenswrapper[4903]: I0128 15:47:30.413411 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:30 crc kubenswrapper[4903]: E0128 15:47:30.413694 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:31 crc kubenswrapper[4903]: I0128 15:47:31.412959 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:31 crc kubenswrapper[4903]: E0128 15:47:31.413064 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:31 crc kubenswrapper[4903]: I0128 15:47:31.412959 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:31 crc kubenswrapper[4903]: E0128 15:47:31.413323 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:32 crc kubenswrapper[4903]: I0128 15:47:32.413214 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:32 crc kubenswrapper[4903]: E0128 15:47:32.413339 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:32 crc kubenswrapper[4903]: I0128 15:47:32.413437 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:32 crc kubenswrapper[4903]: E0128 15:47:32.413658 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:33 crc kubenswrapper[4903]: I0128 15:47:33.413326 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:33 crc kubenswrapper[4903]: I0128 15:47:33.413369 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:33 crc kubenswrapper[4903]: E0128 15:47:33.413485 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:33 crc kubenswrapper[4903]: E0128 15:47:33.413649 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:33 crc kubenswrapper[4903]: E0128 15:47:33.515906 4903 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:47:34 crc kubenswrapper[4903]: I0128 15:47:34.413252 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:34 crc kubenswrapper[4903]: E0128 15:47:34.413415 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:34 crc kubenswrapper[4903]: I0128 15:47:34.413451 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:34 crc kubenswrapper[4903]: E0128 15:47:34.413651 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:35 crc kubenswrapper[4903]: I0128 15:47:35.413249 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:35 crc kubenswrapper[4903]: I0128 15:47:35.413404 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:35 crc kubenswrapper[4903]: E0128 15:47:35.413460 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:35 crc kubenswrapper[4903]: E0128 15:47:35.413678 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:36 crc kubenswrapper[4903]: I0128 15:47:36.412344 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:36 crc kubenswrapper[4903]: I0128 15:47:36.412427 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:36 crc kubenswrapper[4903]: E0128 15:47:36.412523 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:36 crc kubenswrapper[4903]: E0128 15:47:36.412712 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:37 crc kubenswrapper[4903]: I0128 15:47:37.412831 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:37 crc kubenswrapper[4903]: E0128 15:47:37.412971 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:37 crc kubenswrapper[4903]: I0128 15:47:37.413103 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:37 crc kubenswrapper[4903]: E0128 15:47:37.413681 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:37 crc kubenswrapper[4903]: I0128 15:47:37.413922 4903 scope.go:117] "RemoveContainer" containerID="47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31" Jan 28 15:47:38 crc kubenswrapper[4903]: I0128 15:47:38.124896 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/1.log" Jan 28 15:47:38 crc kubenswrapper[4903]: I0128 15:47:38.124956 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerStarted","Data":"8b220e2208dc7b263de1e53ad8af6f9ba881497ddd3302f155d27d444170c4b4"} Jan 28 15:47:38 crc kubenswrapper[4903]: I0128 15:47:38.413103 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:38 crc kubenswrapper[4903]: E0128 15:47:38.414458 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:38 crc kubenswrapper[4903]: I0128 15:47:38.414704 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:38 crc kubenswrapper[4903]: E0128 15:47:38.414965 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:38 crc kubenswrapper[4903]: E0128 15:47:38.516453 4903 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:47:39 crc kubenswrapper[4903]: I0128 15:47:39.412975 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:39 crc kubenswrapper[4903]: I0128 15:47:39.413091 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:39 crc kubenswrapper[4903]: E0128 15:47:39.413201 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:39 crc kubenswrapper[4903]: E0128 15:47:39.413356 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:40 crc kubenswrapper[4903]: I0128 15:47:40.412895 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:40 crc kubenswrapper[4903]: E0128 15:47:40.413062 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:40 crc kubenswrapper[4903]: I0128 15:47:40.413217 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:40 crc kubenswrapper[4903]: E0128 15:47:40.413839 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:40 crc kubenswrapper[4903]: I0128 15:47:40.414337 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:47:41 crc kubenswrapper[4903]: I0128 15:47:41.140321 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/3.log" Jan 28 15:47:41 crc kubenswrapper[4903]: I0128 15:47:41.144858 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerStarted","Data":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} Jan 28 15:47:41 crc kubenswrapper[4903]: I0128 15:47:41.145346 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:47:41 crc kubenswrapper[4903]: I0128 15:47:41.192084 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podStartSLOduration=112.192064612 podStartE2EDuration="1m52.192064612s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:47:41.191743403 +0000 UTC m=+133.467714934" watchObservedRunningTime="2026-01-28 15:47:41.192064612 +0000 UTC m=+133.468036113" Jan 28 15:47:41 crc kubenswrapper[4903]: I0128 15:47:41.335133 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq2bn"] Jan 28 15:47:41 crc kubenswrapper[4903]: I0128 15:47:41.335269 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:41 crc kubenswrapper[4903]: E0128 15:47:41.335367 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:41 crc kubenswrapper[4903]: I0128 15:47:41.413284 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:41 crc kubenswrapper[4903]: E0128 15:47:41.413484 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:42 crc kubenswrapper[4903]: I0128 15:47:42.413119 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:42 crc kubenswrapper[4903]: E0128 15:47:42.413290 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:42 crc kubenswrapper[4903]: I0128 15:47:42.413140 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:42 crc kubenswrapper[4903]: E0128 15:47:42.413600 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:43 crc kubenswrapper[4903]: I0128 15:47:43.413113 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:43 crc kubenswrapper[4903]: E0128 15:47:43.413245 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:43 crc kubenswrapper[4903]: I0128 15:47:43.413113 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:43 crc kubenswrapper[4903]: E0128 15:47:43.413454 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:43 crc kubenswrapper[4903]: E0128 15:47:43.518735 4903 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:47:44 crc kubenswrapper[4903]: I0128 15:47:44.412889 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:44 crc kubenswrapper[4903]: E0128 15:47:44.413045 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:44 crc kubenswrapper[4903]: I0128 15:47:44.413256 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:44 crc kubenswrapper[4903]: E0128 15:47:44.413333 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:45 crc kubenswrapper[4903]: I0128 15:47:45.413169 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:45 crc kubenswrapper[4903]: E0128 15:47:45.413345 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:45 crc kubenswrapper[4903]: I0128 15:47:45.414262 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:45 crc kubenswrapper[4903]: E0128 15:47:45.414631 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:46 crc kubenswrapper[4903]: I0128 15:47:46.412717 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:46 crc kubenswrapper[4903]: I0128 15:47:46.412790 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:46 crc kubenswrapper[4903]: E0128 15:47:46.414134 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:46 crc kubenswrapper[4903]: E0128 15:47:46.414648 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:47 crc kubenswrapper[4903]: I0128 15:47:47.412579 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:47 crc kubenswrapper[4903]: I0128 15:47:47.412612 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:47 crc kubenswrapper[4903]: E0128 15:47:47.412790 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kq2bn" podUID="90b23d2e-fec0-494c-9a60-461cc16fe0ae" Jan 28 15:47:47 crc kubenswrapper[4903]: E0128 15:47:47.412893 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:47:48 crc kubenswrapper[4903]: I0128 15:47:48.413340 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:48 crc kubenswrapper[4903]: I0128 15:47:48.413366 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:48 crc kubenswrapper[4903]: E0128 15:47:48.415521 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:47:48 crc kubenswrapper[4903]: E0128 15:47:48.415697 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:47:49 crc kubenswrapper[4903]: I0128 15:47:49.412771 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:49 crc kubenswrapper[4903]: I0128 15:47:49.412868 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:47:49 crc kubenswrapper[4903]: I0128 15:47:49.415406 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:47:49 crc kubenswrapper[4903]: I0128 15:47:49.415589 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:47:49 crc kubenswrapper[4903]: I0128 15:47:49.415615 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 15:47:49 crc kubenswrapper[4903]: I0128 15:47:49.417947 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:47:50 crc kubenswrapper[4903]: I0128 15:47:50.412662 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:50 crc kubenswrapper[4903]: I0128 15:47:50.412756 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:50 crc kubenswrapper[4903]: I0128 15:47:50.416436 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:47:50 crc kubenswrapper[4903]: I0128 15:47:50.417185 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:47:55 crc kubenswrapper[4903]: I0128 15:47:55.293408 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.430638 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:47:56 crc kubenswrapper[4903]: E0128 15:47:56.430839 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:49:58.430800239 +0000 UTC m=+270.706771810 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.431343 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.431416 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.436983 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.439814 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.532384 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.532433 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.535157 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.538014 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.613710 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.613785 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.630819 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.733302 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.740967 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:47:56 crc kubenswrapper[4903]: I0128 15:47:56.981072 4903 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.023353 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-znp46"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.028945 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.029250 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-48dgn"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.029683 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tcmkg"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.029838 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.029923 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.030390 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.030743 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.031009 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.031270 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.034589 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.035185 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.041010 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-522t5"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.041551 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.044353 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.044879 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.045086 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.045591 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.047143 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.047575 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.051485 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmb7b"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.051920 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8t5gp"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.052128 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hwxwx"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.052487 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.053031 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.053232 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.055280 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.055871 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.056108 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5ddtc"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.056599 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.057411 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.057793 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.057814 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.061885 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.062516 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.084737 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.085257 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.085559 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.086130 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jn67q"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.086158 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.086976 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.087005 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.087163 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.087832 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.088030 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.088213 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.088398 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.088626 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.088755 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.107011 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.107660 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w6pt2"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.108010 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.108516 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.108577 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.108655 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.108773 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.109997 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zxr6z"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.110359 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.110783 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.111735 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.112047 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.112671 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.127451 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.128334 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.128957 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dqbbb"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.129378 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.129637 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.129951 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.129976 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kr4qg"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.130129 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.130227 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.130722 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-znp46"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.130749 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.134664 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.135166 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.135768 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.138190 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.139224 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.139409 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.140022 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141388 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e03c0f97-6757-450b-a33d-d76ba42fd4b7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141443 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c32e095-4835-4959-88e5-f061f89b5c41-config\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141486 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-client-ca\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141514 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36423a-6685-4edb-b85f-f6aded8a37a7-config\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141566 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-client-ca\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141592 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f43563c-173f-4276-ac59-02fc755b6585-serving-cert\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141618 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/e03c0f97-6757-450b-a33d-d76ba42fd4b7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141644 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-config\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141670 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xx5q\" (UniqueName: \"kubernetes.io/projected/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-kube-api-access-5xx5q\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141700 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92ef8d59-61e9-4e51-97ca-58f14e72535f-serving-cert\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141728 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fe36423a-6685-4edb-b85f-f6aded8a37a7-images\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141751 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-client\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141772 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6e2b7db2-b2c4-4975-b84d-4772de0bae9c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w6pt2\" (UID: \"6e2b7db2-b2c4-4975-b84d-4772de0bae9c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141796 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03c0f97-6757-450b-a33d-d76ba42fd4b7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141820 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-ca\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141845 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-config\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141866 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrr5\" (UniqueName: \"kubernetes.io/projected/92ef8d59-61e9-4e51-97ca-58f14e72535f-kube-api-access-hsrr5\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141885 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c32e095-4835-4959-88e5-f061f89b5c41-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141910 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns8b9\" (UniqueName: \"kubernetes.io/projected/fe36423a-6685-4edb-b85f-f6aded8a37a7-kube-api-access-ns8b9\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141937 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-config\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141963 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-serving-cert\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.141991 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c32e095-4835-4959-88e5-f061f89b5c41-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.142012 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-service-ca\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.142033 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8r8g\" (UniqueName: \"kubernetes.io/projected/6e2b7db2-b2c4-4975-b84d-4772de0bae9c-kube-api-access-r8r8g\") pod \"multus-admission-controller-857f4d67dd-w6pt2\" (UID: \"6e2b7db2-b2c4-4975-b84d-4772de0bae9c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.142060 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcfqg\" (UniqueName: \"kubernetes.io/projected/9f43563c-173f-4276-ac59-02fc755b6585-kube-api-access-lcfqg\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.142083 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd98n\" (UniqueName: \"kubernetes.io/projected/e03c0f97-6757-450b-a33d-d76ba42fd4b7-kube-api-access-cd98n\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.142124 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe36423a-6685-4edb-b85f-f6aded8a37a7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.142150 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.149236 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: W0128 15:47:57.155605 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3dce9740f10bb33e1b8949ec0b18b90e0c624fbc8beaae300809fd4b96bb5099 WatchSource:0}: Error finding container 3dce9740f10bb33e1b8949ec0b18b90e0c624fbc8beaae300809fd4b96bb5099: Status 404 returned error can't find the container with id 3dce9740f10bb33e1b8949ec0b18b90e0c624fbc8beaae300809fd4b96bb5099 Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.164980 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.165459 4903 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.165717 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.166030 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.166211 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.166487 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.167120 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.167380 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180166 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180357 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180473 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180565 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180645 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180735 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180806 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180874 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180939 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180990 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.180958 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181337 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181489 4903 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181623 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181631 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181720 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181726 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181870 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.181977 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.182067 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.182162 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.182299 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.182572 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.182719 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-88mbt"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.183295 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.183605 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fp7dl"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.183625 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.188387 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.188868 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.188886 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.189074 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.189101 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.194877 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.195080 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.195172 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.195283 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.195435 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.195626 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.195759 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.195871 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.196263 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.200751 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.200940 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201107 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201190 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201186 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 15:47:57 crc kubenswrapper[4903]: 
I0128 15:47:57.201263 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201354 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201392 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201476 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201498 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201573 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201603 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201359 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201718 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201794 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201847 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.201988 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.202056 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.202147 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.202217 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.202281 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.202351 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.202446 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.202514 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:47:57 
crc kubenswrapper[4903]: I0128 15:47:57.202618 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.203179 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.203289 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.203318 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.203407 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.203500 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.207681 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.208052 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.235581 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.236701 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.238377 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.238626 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.239275 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.245005 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.245977 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bp7hn"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.246801 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.247775 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.248510 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.249281 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.249959 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250872 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e03c0f97-6757-450b-a33d-d76ba42fd4b7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250896 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-config\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250916 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xx5q\" (UniqueName: \"kubernetes.io/projected/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-kube-api-access-5xx5q\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250934 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fe36423a-6685-4edb-b85f-f6aded8a37a7-images\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250949 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92ef8d59-61e9-4e51-97ca-58f14e72535f-serving-cert\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250965 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-client\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250982 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6e2b7db2-b2c4-4975-b84d-4772de0bae9c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w6pt2\" (UID: \"6e2b7db2-b2c4-4975-b84d-4772de0bae9c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.250997 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03c0f97-6757-450b-a33d-d76ba42fd4b7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251014 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-ca\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251029 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-config\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251044 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrr5\" (UniqueName: \"kubernetes.io/projected/92ef8d59-61e9-4e51-97ca-58f14e72535f-kube-api-access-hsrr5\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251059 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c32e095-4835-4959-88e5-f061f89b5c41-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251073 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-config\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251087 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns8b9\" (UniqueName: \"kubernetes.io/projected/fe36423a-6685-4edb-b85f-f6aded8a37a7-kube-api-access-ns8b9\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251112 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-serving-cert\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251130 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c32e095-4835-4959-88e5-f061f89b5c41-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: 
\"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251146 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcfqg\" (UniqueName: \"kubernetes.io/projected/9f43563c-173f-4276-ac59-02fc755b6585-kube-api-access-lcfqg\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251161 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd98n\" (UniqueName: \"kubernetes.io/projected/e03c0f97-6757-450b-a33d-d76ba42fd4b7-kube-api-access-cd98n\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251176 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-service-ca\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251190 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8r8g\" (UniqueName: \"kubernetes.io/projected/6e2b7db2-b2c4-4975-b84d-4772de0bae9c-kube-api-access-r8r8g\") pod \"multus-admission-controller-857f4d67dd-w6pt2\" (UID: \"6e2b7db2-b2c4-4975-b84d-4772de0bae9c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251216 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe36423a-6685-4edb-b85f-f6aded8a37a7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251233 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251248 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e03c0f97-6757-450b-a33d-d76ba42fd4b7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251264 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c32e095-4835-4959-88e5-f061f89b5c41-config\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251282 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-client-ca\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251297 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36423a-6685-4edb-b85f-f6aded8a37a7-config\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251320 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-client-ca\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251337 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f43563c-173f-4276-ac59-02fc755b6585-serving-cert\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251620 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.251718 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.252959 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.253071 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.253889 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.255128 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.255169 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f43563c-173f-4276-ac59-02fc755b6585-serving-cert\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.255826 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-serving-cert\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.258171 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-48dgn"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.261106 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.261323 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.261460 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8t5gp"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.263925 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-client-ca\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.264996 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.265755 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-config\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.267359 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-client-ca\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.270214 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"11ca473aebcca54f7748ca610f46f650c622fb1d747c9b7e477fc3ec0b33f876"} Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.270290 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"831baf1fd25059a0cb9b32f8015bfb92d8c4f6abb2534d2f42f993dd043856b4"} Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.271703 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.271939 4903 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-console/downloads-7954f5f757-tcmkg"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.274982 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.274990 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-config\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.275434 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-service-ca\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.275580 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-ca\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.275904 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.276171 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c32e095-4835-4959-88e5-f061f89b5c41-config\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.277270 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92ef8d59-61e9-4e51-97ca-58f14e72535f-serving-cert\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.282219 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/92ef8d59-61e9-4e51-97ca-58f14e72535f-etcd-client\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.282376 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fe36423a-6685-4edb-b85f-f6aded8a37a7-images\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.282725 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3dce9740f10bb33e1b8949ec0b18b90e0c624fbc8beaae300809fd4b96bb5099"} Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.283812 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe36423a-6685-4edb-b85f-f6aded8a37a7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.283907 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36423a-6685-4edb-b85f-f6aded8a37a7-config\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.284466 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"31915165071f2fa3e4997ba04eb2a3ff2219760e8b0b6273400fcbdd0b70b84d"} Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.284521 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.284809 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e03c0f97-6757-450b-a33d-d76ba42fd4b7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.285314 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.285323 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6e2b7db2-b2c4-4975-b84d-4772de0bae9c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w6pt2\" (UID: \"6e2b7db2-b2c4-4975-b84d-4772de0bae9c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.285343 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c32e095-4835-4959-88e5-f061f89b5c41-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.288090 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e03c0f97-6757-450b-a33d-d76ba42fd4b7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.289742 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmb7b"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.291254 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.292637 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.298245 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5ddtc"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.298416 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.299335 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-config\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.302451 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-522t5"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.306917 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.308744 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.310250 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fz85j"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.312401 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w6pt2"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.312602 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.312892 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.313179 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.314858 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.315963 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.317802 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.318784 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zxr6z"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.320101 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.321315 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9cpcp"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.322345 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.323348 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.325048 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dqbbb"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.327979 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hwxwx"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.333947 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.342874 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.342925 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.344153 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jn67q"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.345455 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.346464 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk"] Jan 28 15:47:57 crc 
kubenswrapper[4903]: I0128 15:47:57.347698 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.349131 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.350729 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fz85j"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.352098 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.352264 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.353093 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.354204 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.355186 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fp7dl"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.356205 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9cpcp"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.357299 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.358684 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bp7hn"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.361317 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.362628 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-88mbt"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.363381 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xgnx2"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.365865 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8v8wj"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.366009 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.366845 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8v8wj"] Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.367043 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.372642 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.392701 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.412245 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.433551 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.455190 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.492211 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.511789 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.532677 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.552907 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.572265 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.592448 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.612096 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.632057 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.652170 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.679797 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.693292 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.712738 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.732540 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.752843 4903 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.773396 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.793453 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.813791 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.833026 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.852775 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.873377 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.892242 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.918148 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.932865 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.952948 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:47:57 crc kubenswrapper[4903]: I0128 15:47:57.972890 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.001233 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.012024 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.033410 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.065870 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.072424 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.093159 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.115206 4903 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.133300 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.151246 4903 request.go:700] Waited for 1.010610596s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&limit=500&resourceVersion=0 Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.153790 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.174015 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.203689 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.213048 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.232675 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.253292 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.273310 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.290224 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c3ceca943ef733472cc10b2b7168f0ecf9e1b79b235e02449d6fe6530d9b964e"} Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.291793 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dfed4c4f26f4ad082dc57f81118c4636b2ecdb225740ab8440984987208182fb"} Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.291890 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.292508 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.312078 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.332865 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.352754 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 
15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.372740 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.392576 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.412967 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.432481 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.452562 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.486310 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.492448 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.533074 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.552295 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.572908 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.591796 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.612466 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.652416 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8r8g\" (UniqueName: \"kubernetes.io/projected/6e2b7db2-b2c4-4975-b84d-4772de0bae9c-kube-api-access-r8r8g\") pod \"multus-admission-controller-857f4d67dd-w6pt2\" (UID: \"6e2b7db2-b2c4-4975-b84d-4772de0bae9c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.666235 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns8b9\" (UniqueName: \"kubernetes.io/projected/fe36423a-6685-4edb-b85f-f6aded8a37a7-kube-api-access-ns8b9\") pod \"machine-api-operator-5694c8668f-hwxwx\" (UID: \"fe36423a-6685-4edb-b85f-f6aded8a37a7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.691034 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcfqg\" (UniqueName: \"kubernetes.io/projected/9f43563c-173f-4276-ac59-02fc755b6585-kube-api-access-lcfqg\") pod \"route-controller-manager-6576b87f9c-t4vvt\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.701716 4903 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.705864 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c32e095-4835-4959-88e5-f061f89b5c41-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-s8rwr\" (UID: \"2c32e095-4835-4959-88e5-f061f89b5c41\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.726995 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd98n\" (UniqueName: \"kubernetes.io/projected/e03c0f97-6757-450b-a33d-d76ba42fd4b7-kube-api-access-cd98n\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.747776 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrr5\" (UniqueName: \"kubernetes.io/projected/92ef8d59-61e9-4e51-97ca-58f14e72535f-kube-api-access-hsrr5\") pod \"etcd-operator-b45778765-jn67q\" (UID: \"92ef8d59-61e9-4e51-97ca-58f14e72535f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.751723 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.772897 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.792067 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.813021 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.851132 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xx5q\" (UniqueName: \"kubernetes.io/projected/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-kube-api-access-5xx5q\") pod \"controller-manager-879f6c89f-znp46\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.864648 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e03c0f97-6757-450b-a33d-d76ba42fd4b7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-f5mnt\" (UID: \"e03c0f97-6757-450b-a33d-d76ba42fd4b7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.873473 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.874283 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.876096 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w6pt2"] Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.886634 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:47:58 crc kubenswrapper[4903]: W0128 15:47:58.886890 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e2b7db2_b2c4_4975_b84d_4772de0bae9c.slice/crio-79bc3d6dd65da696bf662b1b878cb485a4dad1f5dfd1e5fbe8a6e9e948077832 WatchSource:0}: Error finding container 79bc3d6dd65da696bf662b1b878cb485a4dad1f5dfd1e5fbe8a6e9e948077832: Status 404 returned error can't find the container with id 79bc3d6dd65da696bf662b1b878cb485a4dad1f5dfd1e5fbe8a6e9e948077832 Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.892251 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.892295 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.913403 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.933124 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.938318 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.953325 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.969628 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.972655 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.976223 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" Jan 28 15:47:58 crc kubenswrapper[4903]: I0128 15:47:58.992962 4903 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.016706 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.017675 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.032730 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:47:59 crc kubenswrapper[4903]: W0128 15:47:59.035737 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c32e095_4835_4959_88e5_f061f89b5c41.slice/crio-de965750bd3d9b3d1984915e327c9bf413f67dc393b7c0cbac6607c679e27a5f WatchSource:0}: Error finding container de965750bd3d9b3d1984915e327c9bf413f67dc393b7c0cbac6607c679e27a5f: Status 404 returned error can't find the container with id de965750bd3d9b3d1984915e327c9bf413f67dc393b7c0cbac6607c679e27a5f Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.053773 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.074489 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.092422 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.112802 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.133420 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.152905 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.168125 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-znp46"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.171362 4903 request.go:700] Waited for 1.805015968s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.174718 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.192547 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:47:59 crc kubenswrapper[4903]: W0128 15:47:59.193298 4903 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d82ab75_41cc_46c6_8ffb_7e81bc29cfff.slice/crio-fdc6fc75a1f514d7d4524fe336089c0a93d818ccfeb33dda26db278af90c399e WatchSource:0}: Error finding container fdc6fc75a1f514d7d4524fe336089c0a93d818ccfeb33dda26db278af90c399e: Status 404 returned error can't find the container with id fdc6fc75a1f514d7d4524fe336089c0a93d818ccfeb33dda26db278af90c399e Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.207677 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hwxwx"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.213332 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.237674 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275489 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-bound-sa-token\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275547 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-encryption-config\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275565 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nh6z\" (UniqueName: \"kubernetes.io/projected/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-kube-api-access-7nh6z\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275585 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275601 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-certificates\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275619 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdvn\" (UniqueName: \"kubernetes.io/projected/a1c4af21-1253-4476-8f98-98377ab79e81-kube-api-access-hwdvn\") pod \"downloads-7954f5f757-tcmkg\" (UID: \"a1c4af21-1253-4476-8f98-98377ab79e81\") " 
pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275832 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275856 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dpdv\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-kube-api-access-9dpdv\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275872 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-auth-proxy-config\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275892 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-config\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275916 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-service-ca-bundle\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275935 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-etcd-serving-ca\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275953 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-serving-cert\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.275973 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-trusted-ca-bundle\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc 
kubenswrapper[4903]: I0128 15:47:59.275989 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2hj\" (UniqueName: \"kubernetes.io/projected/69321e4b-4392-413f-839b-57040cd0a9bb-kube-api-access-gs2hj\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276008 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276027 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1dff77d-5e58-42e0-bfac-040973ea3094-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276045 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-etcd-client\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276063 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-node-pullsecrets\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276081 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-audit-policies\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276097 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-config\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276113 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276129 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-audit\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276145 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-trusted-ca-bundle\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276161 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-config\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276180 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276198 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94760384-fcfe-4f1e-bd84-aa310251260c-audit-dir\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276216 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b1cf44e-4593-4c6c-9a2c-d742840ec711-metrics-tls\") pod \"dns-operator-744455d44c-gmb7b\" (UID: \"4b1cf44e-4593-4c6c-9a2c-d742840ec711\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276232 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-audit-dir\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276264 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d4831f-857e-492e-b40a-d2f1a7b38780-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276280 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1459b817-2f82-48c8-8267-bdef187b4df9-serving-cert\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276299 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-machine-approver-tls\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276320 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1dff77d-5e58-42e0-bfac-040973ea3094-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276396 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcljj\" (UniqueName: \"kubernetes.io/projected/1459b817-2f82-48c8-8267-bdef187b4df9-kube-api-access-xcljj\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276416 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-encryption-config\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276471 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-trusted-ca\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276489 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276540 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnc6v\" (UniqueName: \"kubernetes.io/projected/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-kube-api-access-nnc6v\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276560 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276596 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vttl\" (UniqueName: \"kubernetes.io/projected/94760384-fcfe-4f1e-bd84-aa310251260c-kube-api-access-2vttl\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276618 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/395779b5-5c6e-45a6-8d06-361b72523703-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l2wzb\" (UID: \"395779b5-5c6e-45a6-8d06-361b72523703\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276637 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-etcd-client\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276700 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-oauth-config\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276760 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw66v\" (UniqueName: \"kubernetes.io/projected/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-kube-api-access-dw66v\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276782 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-kube-api-access-6kkgj\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276805 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-config\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276822 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-serving-cert\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276842 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d4831f-857e-492e-b40a-d2f1a7b38780-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276858 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276878 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1459b817-2f82-48c8-8267-bdef187b4df9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276895 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-oauth-serving-cert\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276913 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpgd\" (UniqueName: \"kubernetes.io/projected/e8d4831f-857e-492e-b40a-d2f1a7b38780-kube-api-access-bgpgd\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276932 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98gdb\" (UniqueName: \"kubernetes.io/projected/395779b5-5c6e-45a6-8d06-361b72523703-kube-api-access-98gdb\") pod \"cluster-samples-operator-665b6dd947-l2wzb\" (UID: \"395779b5-5c6e-45a6-8d06-361b72523703\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276949 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-image-import-ca\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276967 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.276987 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-tls\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.277046 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-service-ca\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.277065 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnqtf\" (UniqueName: \"kubernetes.io/projected/4b1cf44e-4593-4c6c-9a2c-d742840ec711-kube-api-access-wnqtf\") pod \"dns-operator-744455d44c-gmb7b\" (UID: \"4b1cf44e-4593-4c6c-9a2c-d742840ec711\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.277082 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-serving-cert\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.277098 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69321e4b-4392-413f-839b-57040cd0a9bb-serving-cert\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.277288 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:47:59.77727554 +0000 UTC m=+152.053247051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.313818 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.314890 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" event={"ID":"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff","Type":"ContainerStarted","Data":"fdc6fc75a1f514d7d4524fe336089c0a93d818ccfeb33dda26db278af90c399e"} Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.315821 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" event={"ID":"2c32e095-4835-4959-88e5-f061f89b5c41","Type":"ContainerStarted","Data":"de965750bd3d9b3d1984915e327c9bf413f67dc393b7c0cbac6607c679e27a5f"} Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.316766 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" event={"ID":"fe36423a-6685-4edb-b85f-f6aded8a37a7","Type":"ContainerStarted","Data":"f28f56130064f57b371970bc03473c73ac27ceeecbbdb8c7747082fd71971ac3"} Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.318375 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" event={"ID":"6e2b7db2-b2c4-4975-b84d-4772de0bae9c","Type":"ContainerStarted","Data":"79bc3d6dd65da696bf662b1b878cb485a4dad1f5dfd1e5fbe8a6e9e948077832"} Jan 28 15:47:59 crc kubenswrapper[4903]: W0128 15:47:59.325138 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f43563c_173f_4276_ac59_02fc755b6585.slice/crio-49c65416cad9c207aa297c0bd2540d4fc76cb2ab04eded387489ea5b54d6117b WatchSource:0}: Error finding container 49c65416cad9c207aa297c0bd2540d4fc76cb2ab04eded387489ea5b54d6117b: Status 404 returned error can't find the container with id 49c65416cad9c207aa297c0bd2540d4fc76cb2ab04eded387489ea5b54d6117b Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.346108 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378207 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.378375 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:47:59.878350342 +0000 UTC m=+152.154321853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378453 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-config\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378493 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-trusted-ca-bundle\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378550 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378577 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378622 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94760384-fcfe-4f1e-bd84-aa310251260c-audit-dir\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378638 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b1cf44e-4593-4c6c-9a2c-d742840ec711-metrics-tls\") pod \"dns-operator-744455d44c-gmb7b\" (UID: \"4b1cf44e-4593-4c6c-9a2c-d742840ec711\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378654 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-audit-dir\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378671 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dsfkg\" (UniqueName: \"kubernetes.io/projected/df7cb6af-bde0-450e-a092-732c69105881-kube-api-access-dsfkg\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378686 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf6103ed-279b-4aed-846b-5437d8041540-metrics-tls\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378710 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bff9e5b8-162e-4335-9801-3419363a16a7-trusted-ca\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378725 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf6103ed-279b-4aed-846b-5437d8041540-trusted-ca\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378745 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2ph\" (UniqueName: \"kubernetes.io/projected/1782c794-6457-46e7-9ddb-547b000c6bf7-kube-api-access-4n2ph\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378812 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfpgc\" (UniqueName: \"kubernetes.io/projected/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-kube-api-access-rfpgc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378836 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2489fa1c-af9a-4082-a875-738a1c2fae88-proxy-tls\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378859 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcljj\" (UniqueName: \"kubernetes.io/projected/1459b817-2f82-48c8-8267-bdef187b4df9-kube-api-access-xcljj\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378880 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-encryption-config\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378909 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnc6v\" (UniqueName: \"kubernetes.io/projected/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-kube-api-access-nnc6v\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378931 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/39430551-2b2f-42ca-a36d-ddfea173a4df-node-bootstrap-token\") pod \"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378954 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkl46\" (UniqueName: \"kubernetes.io/projected/9d22972e-928a-456e-9357-4693bb34d49d-kube-api-access-nkl46\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378976 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/395779b5-5c6e-45a6-8d06-361b72523703-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l2wzb\" (UID: \"395779b5-5c6e-45a6-8d06-361b72523703\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.378996 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-etcd-client\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379017 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44fw\" (UniqueName: \"kubernetes.io/projected/39430551-2b2f-42ca-a36d-ddfea173a4df-kube-api-access-x44fw\") pod \"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379040 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnj6w\" (UniqueName: \"kubernetes.io/projected/25dd11d8-a217-40ac-8d11-03b28106776c-kube-api-access-gnj6w\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379055 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4tw\" (UniqueName: \"kubernetes.io/projected/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-kube-api-access-zh4tw\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379075 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/554d4a29-2a6d-44cf-a4a9-641478e299d9-signing-cabundle\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379096 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf6103ed-279b-4aed-846b-5437d8041540-bound-sa-token\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379150 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-policies\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379166 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-dir\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379189 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d4831f-857e-492e-b40a-d2f1a7b38780-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379208 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-serving-cert\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379226 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379251 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2skp\" (UniqueName: 
\"kubernetes.io/projected/440d0fa6-743a-46f6-843a-f3af8e9ec321-kube-api-access-s2skp\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379276 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-oauth-serving-cert\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379293 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379310 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-image-import-ca\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379338 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379352 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379367 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1782c794-6457-46e7-9ddb-547b000c6bf7-proxy-tls\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379382 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06307fc1-5240-40a9-893d-e302e487fce2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379398 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q26vp\" (UniqueName: \"kubernetes.io/projected/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-kube-api-access-q26vp\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379414 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-service-ca\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379431 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnqtf\" (UniqueName: \"kubernetes.io/projected/4b1cf44e-4593-4c6c-9a2c-d742840ec711-kube-api-access-wnqtf\") pod \"dns-operator-744455d44c-gmb7b\" (UID: \"4b1cf44e-4593-4c6c-9a2c-d742840ec711\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379449 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-registration-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379476 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69321e4b-4392-413f-839b-57040cd0a9bb-serving-cert\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379492 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d22972e-928a-456e-9357-4693bb34d49d-apiservice-cert\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379518 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-metrics-certs\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379554 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379569 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/39430551-2b2f-42ca-a36d-ddfea173a4df-certs\") pod \"machine-config-server-xgnx2\" (UID: 
\"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379596 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cb6af-bde0-450e-a092-732c69105881-config\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379611 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4lzs\" (UniqueName: \"kubernetes.io/projected/331fb96f-546c-4218-9f5b-6a358daf2f16-kube-api-access-f4lzs\") pod \"migrator-59844c95c7-4m49s\" (UID: \"331fb96f-546c-4218-9f5b-6a358daf2f16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379628 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-encryption-config\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379643 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nh6z\" (UniqueName: \"kubernetes.io/projected/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-kube-api-access-7nh6z\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379914 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ls2\" (UniqueName: \"kubernetes.io/projected/7afbeb7b-ff1e-40bf-903c-64e61eb493d7-kube-api-access-95ls2\") pod \"package-server-manager-789f6589d5-vxzhf\" (UID: \"7afbeb7b-ff1e-40bf-903c-64e61eb493d7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379956 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcm67\" (UniqueName: \"kubernetes.io/projected/391b7add-cc22-451b-a87a-8130bb8924cb-kube-api-access-vcm67\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.379994 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380064 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: 
\"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380093 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-certificates\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380112 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdvn\" (UniqueName: \"kubernetes.io/projected/a1c4af21-1253-4476-8f98-98377ab79e81-kube-api-access-hwdvn\") pod \"downloads-7954f5f757-tcmkg\" (UID: \"a1c4af21-1253-4476-8f98-98377ab79e81\") " pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380133 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2489fa1c-af9a-4082-a875-738a1c2fae88-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380151 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbskn\" (UniqueName: \"kubernetes.io/projected/2489fa1c-af9a-4082-a875-738a1c2fae88-kube-api-access-lbskn\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380172 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-mountpoint-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380192 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dpdv\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-kube-api-access-9dpdv\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380210 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9d22972e-928a-456e-9357-4693bb34d49d-tmpfs\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380237 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380254 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh55\" (UniqueName: \"kubernetes.io/projected/8c895b2d-4baa-40f3-b942-9a64cd93f395-kube-api-access-2vh55\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8r4q\" (UID: \"8c895b2d-4baa-40f3-b942-9a64cd93f395\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380273 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-auth-proxy-config\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380291 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-etcd-serving-ca\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380332 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-serving-cert\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380353 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2hj\" (UniqueName: \"kubernetes.io/projected/69321e4b-4392-413f-839b-57040cd0a9bb-kube-api-access-gs2hj\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380377 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380403 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7vn\" (UniqueName: \"kubernetes.io/projected/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-kube-api-access-2v7vn\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380441 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1dff77d-5e58-42e0-bfac-040973ea3094-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 
15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380457 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-etcd-client\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380477 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-config\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380498 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfq4\" (UniqueName: \"kubernetes.io/projected/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-kube-api-access-zcfq4\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380515 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-audit-policies\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380590 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380609 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380626 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-audit\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380643 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbcrt\" (UniqueName: \"kubernetes.io/projected/bf6103ed-279b-4aed-846b-5437d8041540-kube-api-access-wbcrt\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380720 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e8d4831f-857e-492e-b40a-d2f1a7b38780-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380738 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1459b817-2f82-48c8-8267-bdef187b4df9-serving-cert\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380755 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-machine-approver-tls\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380775 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9e5b8-162e-4335-9801-3419363a16a7-serving-cert\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380807 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d22972e-928a-456e-9357-4693bb34d49d-webhook-cert\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380822 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff9e5b8-162e-4335-9801-3419363a16a7-config\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.380875 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-config\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.381021 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8d4831f-857e-492e-b40a-d2f1a7b38780-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.381302 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-service-ca\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " 
pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.381375 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-oauth-serving-cert\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.381852 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1dff77d-5e58-42e0-bfac-040973ea3094-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.381914 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94760384-fcfe-4f1e-bd84-aa310251260c-audit-dir\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.381999 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-etcd-serving-ca\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.382107 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-certificates\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.382878 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:47:59.88286745 +0000 UTC m=+152.158838961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383226 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-config\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383300 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-audit-dir\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383342 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1dff77d-5e58-42e0-bfac-040973ea3094-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383389 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-trusted-ca\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383409 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383431 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-stats-auth\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383450 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1782c794-6457-46e7-9ddb-547b000c6bf7-images\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383515 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383557 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-socket-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383566 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-auth-proxy-config\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383577 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383642 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-oauth-config\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383672 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vttl\" (UniqueName: \"kubernetes.io/projected/94760384-fcfe-4f1e-bd84-aa310251260c-kube-api-access-2vttl\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383703 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06307fc1-5240-40a9-893d-e302e487fce2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383734 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-kube-api-access-6kkgj\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383758 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw66v\" (UniqueName: \"kubernetes.io/projected/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-kube-api-access-dw66v\") pod \"machine-approver-56656f9798-6c24v\" (UID: 
\"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383785 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383817 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383866 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-config\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.383934 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-trusted-ca-bundle\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384310 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384471 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4da13ff-7bf6-42cf-a5bb-352f453abdb4-cert\") pod \"ingress-canary-9cpcp\" (UID: \"a4da13ff-7bf6-42cf-a5bb-352f453abdb4\") " pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384502 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384551 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" 
Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384578 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384603 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs79m\" (UniqueName: \"kubernetes.io/projected/bff9e5b8-162e-4335-9801-3419363a16a7-kube-api-access-vs79m\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384628 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06307fc1-5240-40a9-893d-e302e487fce2-config\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384658 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-plugins-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384687 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1459b817-2f82-48c8-8267-bdef187b4df9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384716 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98gdb\" (UniqueName: \"kubernetes.io/projected/395779b5-5c6e-45a6-8d06-361b72523703-kube-api-access-98gdb\") pod \"cluster-samples-operator-665b6dd947-l2wzb\" (UID: \"395779b5-5c6e-45a6-8d06-361b72523703\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384742 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpgd\" (UniqueName: \"kubernetes.io/projected/e8d4831f-857e-492e-b40a-d2f1a7b38780-kube-api-access-bgpgd\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384770 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384798 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/391b7add-cc22-451b-a87a-8130bb8924cb-config-volume\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384833 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-tls\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384857 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.384884 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-serving-cert\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.385458 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-encryption-config\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.385605 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-etcd-client\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.385927 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.386082 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.386293 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-trusted-ca\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.386369 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-audit-policies\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.386657 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-audit\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.386697 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-config\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.387167 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.387327 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.387774 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69321e4b-4392-413f-839b-57040cd0a9bb-serving-cert\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.387895 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/440d0fa6-743a-46f6-843a-f3af8e9ec321-srv-cert\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388013 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/729ae87f-e430-460d-a99c-7b65c5e0f71c-metrics-tls\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388107 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-bound-sa-token\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388159 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afbeb7b-ff1e-40bf-903c-64e61eb493d7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vxzhf\" (UID: \"7afbeb7b-ff1e-40bf-903c-64e61eb493d7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388309 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df7cb6af-bde0-450e-a092-732c69105881-serving-cert\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388365 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c895b2d-4baa-40f3-b942-9a64cd93f395-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8r4q\" (UID: \"8c895b2d-4baa-40f3-b942-9a64cd93f395\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388408 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjbj\" (UniqueName: \"kubernetes.io/projected/729ae87f-e430-460d-a99c-7b65c5e0f71c-kube-api-access-vxjbj\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388423 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1459b817-2f82-48c8-8267-bdef187b4df9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388443 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qf5\" (UniqueName: \"kubernetes.io/projected/a4da13ff-7bf6-42cf-a5bb-352f453abdb4-kube-api-access-h7qf5\") pod \"ingress-canary-9cpcp\" (UID: \"a4da13ff-7bf6-42cf-a5bb-352f453abdb4\") " pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388743 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-srv-cert\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.388827 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389018 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-service-ca-bundle\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389032 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-machine-approver-tls\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389081 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1459b817-2f82-48c8-8267-bdef187b4df9-serving-cert\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389282 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-serving-cert\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389361 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/554d4a29-2a6d-44cf-a4a9-641478e299d9-signing-key\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389394 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729ae87f-e430-460d-a99c-7b65c5e0f71c-config-volume\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389441 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-config\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389467 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-default-certificate\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389584 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25dd11d8-a217-40ac-8d11-03b28106776c-service-ca-bundle\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389624 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkw8\" (UniqueName: \"kubernetes.io/projected/554d4a29-2a6d-44cf-a4a9-641478e299d9-kube-api-access-2qkw8\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389652 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/391b7add-cc22-451b-a87a-8130bb8924cb-secret-volume\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389674 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-encryption-config\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389696 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69321e4b-4392-413f-839b-57040cd0a9bb-service-ca-bundle\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389729 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-trusted-ca-bundle\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389787 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-csi-data-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.389906 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 
28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390012 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/440d0fa6-743a-46f6-843a-f3af8e9ec321-profile-collector-cert\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390039 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1782c794-6457-46e7-9ddb-547b000c6bf7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390073 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-node-pullsecrets\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390244 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-node-pullsecrets\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390343 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/395779b5-5c6e-45a6-8d06-361b72523703-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-l2wzb\" (UID: \"395779b5-5c6e-45a6-8d06-361b72523703\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390588 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b1cf44e-4593-4c6c-9a2c-d742840ec711-metrics-tls\") pod \"dns-operator-744455d44c-gmb7b\" (UID: \"4b1cf44e-4593-4c6c-9a2c-d742840ec711\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390623 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-config\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390657 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94760384-fcfe-4f1e-bd84-aa310251260c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.390974 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-trusted-ca-bundle\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.391546 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8d4831f-857e-492e-b40a-d2f1a7b38780-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.391643 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-oauth-config\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.391876 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-tls\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.391944 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.394083 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94760384-fcfe-4f1e-bd84-aa310251260c-serving-cert\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.394719 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1dff77d-5e58-42e0-bfac-040973ea3094-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.396718 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-serving-cert\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.400558 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-etcd-client\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.400708 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-image-import-ca\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.426198 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcljj\" (UniqueName: \"kubernetes.io/projected/1459b817-2f82-48c8-8267-bdef187b4df9-kube-api-access-xcljj\") pod \"openshift-config-operator-7777fb866f-pvtfk\" (UID: \"1459b817-2f82-48c8-8267-bdef187b4df9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.446623 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nh6z\" (UniqueName: \"kubernetes.io/projected/cfd6fe6c-cdb9-4b41-a9f4-e245780116be-kube-api-access-7nh6z\") pod \"apiserver-76f77b778f-48dgn\" (UID: \"cfd6fe6c-cdb9-4b41-a9f4-e245780116be\") " pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.463137 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jn67q"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.466106 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnqtf\" (UniqueName: \"kubernetes.io/projected/4b1cf44e-4593-4c6c-9a2c-d742840ec711-kube-api-access-wnqtf\") pod \"dns-operator-744455d44c-gmb7b\" (UID: \"4b1cf44e-4593-4c6c-9a2c-d742840ec711\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:59 crc kubenswrapper[4903]: W0128 15:47:59.470978 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92ef8d59_61e9_4e51_97ca_58f14e72535f.slice/crio-eb69142cbe5c2d4435c21c3a569319c386bb6bb3d0071ef3203b6edcd5af6656 WatchSource:0}: Error finding container eb69142cbe5c2d4435c21c3a569319c386bb6bb3d0071ef3203b6edcd5af6656: Status 404 returned error can't find the container with id eb69142cbe5c2d4435c21c3a569319c386bb6bb3d0071ef3203b6edcd5af6656 Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.487834 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdvn\" (UniqueName: \"kubernetes.io/projected/a1c4af21-1253-4476-8f98-98377ab79e81-kube-api-access-hwdvn\") pod \"downloads-7954f5f757-tcmkg\" (UID: \"a1c4af21-1253-4476-8f98-98377ab79e81\") " pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.491716 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.491844 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:47:59.991829526 +0000 UTC m=+152.267801037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.491926 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.491954 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.491971 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4da13ff-7bf6-42cf-a5bb-352f453abdb4-cert\") pod \"ingress-canary-9cpcp\" (UID: \"a4da13ff-7bf6-42cf-a5bb-352f453abdb4\") " pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.491986 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492002 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492017 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs79m\" (UniqueName: \"kubernetes.io/projected/bff9e5b8-162e-4335-9801-3419363a16a7-kube-api-access-vs79m\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492033 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06307fc1-5240-40a9-893d-e302e487fce2-config\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492048 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-plugins-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492073 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/391b7add-cc22-451b-a87a-8130bb8924cb-config-volume\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492088 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492104 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492121 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/729ae87f-e430-460d-a99c-7b65c5e0f71c-metrics-tls\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492136 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/440d0fa6-743a-46f6-843a-f3af8e9ec321-srv-cert\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492152 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afbeb7b-ff1e-40bf-903c-64e61eb493d7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vxzhf\" (UID: \"7afbeb7b-ff1e-40bf-903c-64e61eb493d7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492172 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df7cb6af-bde0-450e-a092-732c69105881-serving-cert\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492188 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c895b2d-4baa-40f3-b942-9a64cd93f395-control-plane-machine-set-operator-tls\") 
pod \"control-plane-machine-set-operator-78cbb6b69f-t8r4q\" (UID: \"8c895b2d-4baa-40f3-b942-9a64cd93f395\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492204 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qf5\" (UniqueName: \"kubernetes.io/projected/a4da13ff-7bf6-42cf-a5bb-352f453abdb4-kube-api-access-h7qf5\") pod \"ingress-canary-9cpcp\" (UID: \"a4da13ff-7bf6-42cf-a5bb-352f453abdb4\") " pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492218 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxjbj\" (UniqueName: \"kubernetes.io/projected/729ae87f-e430-460d-a99c-7b65c5e0f71c-kube-api-access-vxjbj\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492234 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-srv-cert\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492249 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492263 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/554d4a29-2a6d-44cf-a4a9-641478e299d9-signing-key\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492277 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729ae87f-e430-460d-a99c-7b65c5e0f71c-config-volume\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492291 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25dd11d8-a217-40ac-8d11-03b28106776c-service-ca-bundle\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492306 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkw8\" (UniqueName: \"kubernetes.io/projected/554d4a29-2a6d-44cf-a4a9-641478e299d9-kube-api-access-2qkw8\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492320 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/391b7add-cc22-451b-a87a-8130bb8924cb-secret-volume\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492334 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-default-certificate\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492347 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-csi-data-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492362 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/440d0fa6-743a-46f6-843a-f3af8e9ec321-profile-collector-cert\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492380 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1782c794-6457-46e7-9ddb-547b000c6bf7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492396 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492411 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492428 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsfkg\" (UniqueName: \"kubernetes.io/projected/df7cb6af-bde0-450e-a092-732c69105881-kube-api-access-dsfkg\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492443 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/bf6103ed-279b-4aed-846b-5437d8041540-metrics-tls\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492457 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bff9e5b8-162e-4335-9801-3419363a16a7-trusted-ca\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492473 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf6103ed-279b-4aed-846b-5437d8041540-trusted-ca\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492487 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2ph\" (UniqueName: \"kubernetes.io/projected/1782c794-6457-46e7-9ddb-547b000c6bf7-kube-api-access-4n2ph\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492505 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfpgc\" (UniqueName: \"kubernetes.io/projected/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-kube-api-access-rfpgc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492519 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2489fa1c-af9a-4082-a875-738a1c2fae88-proxy-tls\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492580 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/39430551-2b2f-42ca-a36d-ddfea173a4df-node-bootstrap-token\") pod \"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492605 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkl46\" (UniqueName: \"kubernetes.io/projected/9d22972e-928a-456e-9357-4693bb34d49d-kube-api-access-nkl46\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492632 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44fw\" (UniqueName: \"kubernetes.io/projected/39430551-2b2f-42ca-a36d-ddfea173a4df-kube-api-access-x44fw\") pod 
\"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492653 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnj6w\" (UniqueName: \"kubernetes.io/projected/25dd11d8-a217-40ac-8d11-03b28106776c-kube-api-access-gnj6w\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492674 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4tw\" (UniqueName: \"kubernetes.io/projected/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-kube-api-access-zh4tw\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492695 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/554d4a29-2a6d-44cf-a4a9-641478e299d9-signing-cabundle\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492717 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf6103ed-279b-4aed-846b-5437d8041540-bound-sa-token\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492737 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-policies\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492756 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-dir\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492779 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492804 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2skp\" (UniqueName: \"kubernetes.io/projected/440d0fa6-743a-46f6-843a-f3af8e9ec321-kube-api-access-s2skp\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492827 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492851 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492872 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06307fc1-5240-40a9-893d-e302e487fce2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492893 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q26vp\" (UniqueName: \"kubernetes.io/projected/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-kube-api-access-q26vp\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492914 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1782c794-6457-46e7-9ddb-547b000c6bf7-proxy-tls\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492935 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-registration-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492957 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d22972e-928a-456e-9357-4693bb34d49d-apiservice-cert\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492979 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.492999 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/39430551-2b2f-42ca-a36d-ddfea173a4df-certs\") pod \"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493020 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-metrics-certs\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493039 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cb6af-bde0-450e-a092-732c69105881-config\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493061 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4lzs\" (UniqueName: \"kubernetes.io/projected/331fb96f-546c-4218-9f5b-6a358daf2f16-kube-api-access-f4lzs\") pod \"migrator-59844c95c7-4m49s\" (UID: \"331fb96f-546c-4218-9f5b-6a358daf2f16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493115 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ls2\" (UniqueName: \"kubernetes.io/projected/7afbeb7b-ff1e-40bf-903c-64e61eb493d7-kube-api-access-95ls2\") pod \"package-server-manager-789f6589d5-vxzhf\" (UID: \"7afbeb7b-ff1e-40bf-903c-64e61eb493d7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493136 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcm67\" (UniqueName: \"kubernetes.io/projected/391b7add-cc22-451b-a87a-8130bb8924cb-kube-api-access-vcm67\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493151 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493155 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493193 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2489fa1c-af9a-4082-a875-738a1c2fae88-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493208 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbskn\" (UniqueName: \"kubernetes.io/projected/2489fa1c-af9a-4082-a875-738a1c2fae88-kube-api-access-lbskn\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493226 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-mountpoint-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493241 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9d22972e-928a-456e-9357-4693bb34d49d-tmpfs\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493261 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493283 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vh55\" (UniqueName: \"kubernetes.io/projected/8c895b2d-4baa-40f3-b942-9a64cd93f395-kube-api-access-2vh55\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8r4q\" (UID: \"8c895b2d-4baa-40f3-b942-9a64cd93f395\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493305 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493320 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v7vn\" (UniqueName: \"kubernetes.io/projected/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-kube-api-access-2v7vn\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493337 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfq4\" (UniqueName: \"kubernetes.io/projected/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-kube-api-access-zcfq4\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: 
\"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493363 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbcrt\" (UniqueName: \"kubernetes.io/projected/bf6103ed-279b-4aed-846b-5437d8041540-kube-api-access-wbcrt\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493404 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9e5b8-162e-4335-9801-3419363a16a7-serving-cert\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493428 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d22972e-928a-456e-9357-4693bb34d49d-webhook-cert\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493451 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff9e5b8-162e-4335-9801-3419363a16a7-config\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493472 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-stats-auth\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493489 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493504 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1782c794-6457-46e7-9ddb-547b000c6bf7-images\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493519 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-socket-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.493853 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/06307fc1-5240-40a9-893d-e302e487fce2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.494703 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bff9e5b8-162e-4335-9801-3419363a16a7-trusted-ca\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.496085 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf6103ed-279b-4aed-846b-5437d8041540-metrics-tls\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.496169 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.496696 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06307fc1-5240-40a9-893d-e302e487fce2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.498132 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.498586 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.499031 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a4da13ff-7bf6-42cf-a5bb-352f453abdb4-cert\") pod \"ingress-canary-9cpcp\" (UID: \"a4da13ff-7bf6-42cf-a5bb-352f453abdb4\") " pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.499155 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.499644 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.500201 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.500918 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2489fa1c-af9a-4082-a875-738a1c2fae88-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.501217 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-mountpoint-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.501557 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.501597 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9d22972e-928a-456e-9357-4693bb34d49d-tmpfs\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.501860 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.00184532 +0000 UTC m=+152.277816891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.502194 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-registration-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.503676 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7cb6af-bde0-450e-a092-732c69105881-config\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.503941 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06307fc1-5240-40a9-893d-e302e487fce2-config\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.504023 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-plugins-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.504424 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.504713 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/391b7add-cc22-451b-a87a-8130bb8924cb-config-volume\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.505308 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/729ae87f-e430-460d-a99c-7b65c5e0f71c-config-volume\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.506324 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1782c794-6457-46e7-9ddb-547b000c6bf7-images\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: 
\"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.506371 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff9e5b8-162e-4335-9801-3419363a16a7-config\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.506811 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-socket-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.507614 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.506376 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.507750 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.507854 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1782c794-6457-46e7-9ddb-547b000c6bf7-proxy-tls\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.508283 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-dir\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.508732 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-csi-data-dir\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.509399 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/554d4a29-2a6d-44cf-a4a9-641478e299d9-signing-key\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.510176 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df7cb6af-bde0-450e-a092-732c69105881-serving-cert\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.510281 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c895b2d-4baa-40f3-b942-9a64cd93f395-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8r4q\" (UID: \"8c895b2d-4baa-40f3-b942-9a64cd93f395\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.510476 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bff9e5b8-162e-4335-9801-3419363a16a7-serving-cert\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.511107 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/729ae87f-e430-460d-a99c-7b65c5e0f71c-metrics-tls\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.511958 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnc6v\" (UniqueName: \"kubernetes.io/projected/1cbaa640-07e4-402d-80d3-bb4bc85c9ec5-kube-api-access-nnc6v\") pod \"openshift-controller-manager-operator-756b6f6bc6-krlbv\" (UID: \"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.512573 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/391b7add-cc22-451b-a87a-8130bb8924cb-secret-volume\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.512596 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d22972e-928a-456e-9357-4693bb34d49d-webhook-cert\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.512781 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/440d0fa6-743a-46f6-843a-f3af8e9ec321-srv-cert\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" 
Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.513432 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.513579 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7afbeb7b-ff1e-40bf-903c-64e61eb493d7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vxzhf\" (UID: \"7afbeb7b-ff1e-40bf-903c-64e61eb493d7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.514251 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/440d0fa6-743a-46f6-843a-f3af8e9ec321-profile-collector-cert\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.515089 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-srv-cert\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.517613 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d22972e-928a-456e-9357-4693bb34d49d-apiservice-cert\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.526193 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dpdv\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-kube-api-access-9dpdv\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.537501 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/554d4a29-2a6d-44cf-a4a9-641478e299d9-signing-cabundle\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.537682 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf6103ed-279b-4aed-846b-5437d8041540-trusted-ca\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.538315 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-policies\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.538638 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2489fa1c-af9a-4082-a875-738a1c2fae88-proxy-tls\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.539178 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25dd11d8-a217-40ac-8d11-03b28106776c-service-ca-bundle\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.539188 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1782c794-6457-46e7-9ddb-547b000c6bf7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.539304 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.539653 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.539837 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.540205 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-metrics-certs\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.540386 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: 
\"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.542132 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-default-certificate\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.542667 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/25dd11d8-a217-40ac-8d11-03b28106776c-stats-auth\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.543042 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/39430551-2b2f-42ca-a36d-ddfea173a4df-certs\") pod \"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.543494 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/39430551-2b2f-42ca-a36d-ddfea173a4df-node-bootstrap-token\") pod \"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.546332 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2hj\" (UniqueName: \"kubernetes.io/projected/69321e4b-4392-413f-839b-57040cd0a9bb-kube-api-access-gs2hj\") pod \"authentication-operator-69f744f599-5ddtc\" (UID: \"69321e4b-4392-413f-839b-57040cd0a9bb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.566603 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mnp5j\" (UID: \"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.589417 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.589603 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vttl\" (UniqueName: \"kubernetes.io/projected/94760384-fcfe-4f1e-bd84-aa310251260c-kube-api-access-2vttl\") pod \"apiserver-7bbb656c7d-279w4\" (UID: \"94760384-fcfe-4f1e-bd84-aa310251260c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.594777 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.594932 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.094907844 +0000 UTC m=+152.370879345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.595133 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.595574 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.095561823 +0000 UTC m=+152.371533334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.611396 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-kube-api-access-6kkgj\") pod \"console-f9d7485db-522t5\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.627967 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw66v\" (UniqueName: \"kubernetes.io/projected/8bb1df7f-1aea-4d75-b905-d87e7d34c27b-kube-api-access-dw66v\") pod \"machine-approver-56656f9798-6c24v\" (UID: \"8bb1df7f-1aea-4d75-b905-d87e7d34c27b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.635149 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.650665 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98gdb\" (UniqueName: \"kubernetes.io/projected/395779b5-5c6e-45a6-8d06-361b72523703-kube-api-access-98gdb\") pod \"cluster-samples-operator-665b6dd947-l2wzb\" (UID: \"395779b5-5c6e-45a6-8d06-361b72523703\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.686555 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-bound-sa-token\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.689885 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpgd\" (UniqueName: \"kubernetes.io/projected/e8d4831f-857e-492e-b40a-d2f1a7b38780-kube-api-access-bgpgd\") pod \"openshift-apiserver-operator-796bbdcf4f-8s7l4\" (UID: \"e8d4831f-857e-492e-b40a-d2f1a7b38780\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.696167 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.696297 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:48:00.196271285 +0000 UTC m=+152.472242796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.696655 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.697004 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.196994745 +0000 UTC m=+152.472966336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.707988 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.729708 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkl46\" (UniqueName: \"kubernetes.io/projected/9d22972e-928a-456e-9357-4693bb34d49d-kube-api-access-nkl46\") pod \"packageserver-d55dfcdfc-mvn4x\" (UID: \"9d22972e-928a-456e-9357-4693bb34d49d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.732421 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.759755 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2skp\" (UniqueName: \"kubernetes.io/projected/440d0fa6-743a-46f6-843a-f3af8e9ec321-kube-api-access-s2skp\") pod \"catalog-operator-68c6474976-zlvqj\" (UID: \"440d0fa6-743a-46f6-843a-f3af8e9ec321\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.760064 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.762683 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.793782 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-48dgn"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.798259 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnj6w\" (UniqueName: \"kubernetes.io/projected/25dd11d8-a217-40ac-8d11-03b28106776c-kube-api-access-gnj6w\") pod \"router-default-5444994796-kr4qg\" (UID: \"25dd11d8-a217-40ac-8d11-03b28106776c\") " pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.798591 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.799557 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.800245 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.300221718 +0000 UTC m=+152.576193229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.816811 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.839349 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.839371 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.843442 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.844420 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4tw\" (UniqueName: \"kubernetes.io/projected/85bc5bb3-c08a-4c3a-b3d2-d33397a073ba-kube-api-access-zh4tw\") pod \"csi-hostpathplugin-fz85j\" (UID: \"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba\") " pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.845109 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06307fc1-5240-40a9-893d-e302e487fce2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4brk9\" (UID: \"06307fc1-5240-40a9-893d-e302e487fce2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.848882 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44fw\" (UniqueName: \"kubernetes.io/projected/39430551-2b2f-42ca-a36d-ddfea173a4df-kube-api-access-x44fw\") pod \"machine-config-server-xgnx2\" (UID: \"39430551-2b2f-42ca-a36d-ddfea173a4df\") " pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.862958 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q26vp\" (UniqueName: \"kubernetes.io/projected/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-kube-api-access-q26vp\") pod \"oauth-openshift-558db77b4-dqbbb\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.867180 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tcmkg"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.879076 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbskn\" (UniqueName: \"kubernetes.io/projected/2489fa1c-af9a-4082-a875-738a1c2fae88-kube-api-access-lbskn\") pod \"machine-config-controller-84d6567774-6lfp8\" (UID: \"2489fa1c-af9a-4082-a875-738a1c2fae88\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.879284 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmb7b"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.883599 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.903326 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vh55\" (UniqueName: \"kubernetes.io/projected/8c895b2d-4baa-40f3-b942-9a64cd93f395-kube-api-access-2vh55\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8r4q\" (UID: \"8c895b2d-4baa-40f3-b942-9a64cd93f395\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.903570 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:47:59 crc kubenswrapper[4903]: E0128 15:47:59.903888 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.403876314 +0000 UTC m=+152.679847825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.909181 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfpgc\" (UniqueName: \"kubernetes.io/projected/4a1feaa8-6d8a-44d3-ab2f-22e1571f175e-kube-api-access-rfpgc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6xcxh\" (UID: \"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.918319 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.925878 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.929618 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2ph\" (UniqueName: \"kubernetes.io/projected/1782c794-6457-46e7-9ddb-547b000c6bf7-kube-api-access-4n2ph\") pod \"machine-config-operator-74547568cd-tpxrl\" (UID: \"1782c794-6457-46e7-9ddb-547b000c6bf7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.950586 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs79m\" (UniqueName: \"kubernetes.io/projected/bff9e5b8-162e-4335-9801-3419363a16a7-kube-api-access-vs79m\") pod \"console-operator-58897d9998-zxr6z\" (UID: \"bff9e5b8-162e-4335-9801-3419363a16a7\") " pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.959994 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.973094 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ls2\" (UniqueName: \"kubernetes.io/projected/7afbeb7b-ff1e-40bf-903c-64e61eb493d7-kube-api-access-95ls2\") pod \"package-server-manager-789f6589d5-vxzhf\" (UID: \"7afbeb7b-ff1e-40bf-903c-64e61eb493d7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.982985 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-522t5"] Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.984325 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.995302 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v7vn\" (UniqueName: \"kubernetes.io/projected/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-kube-api-access-2v7vn\") pod \"marketplace-operator-79b997595-fp7dl\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:47:59 crc kubenswrapper[4903]: I0128 15:47:59.995497 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.004108 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.004893 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.504873633 +0000 UTC m=+152.780845144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.017204 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.031170 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4lzs\" (UniqueName: \"kubernetes.io/projected/331fb96f-546c-4218-9f5b-6a358daf2f16-kube-api-access-f4lzs\") pod \"migrator-59844c95c7-4m49s\" (UID: \"331fb96f-546c-4218-9f5b-6a358daf2f16\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.036119 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.037511 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfq4\" (UniqueName: \"kubernetes.io/projected/ab8e1a4a-7e88-4f51-96e3-52f6c6310170-kube-api-access-zcfq4\") pod \"olm-operator-6b444d44fb-bjxkj\" (UID: \"ab8e1a4a-7e88-4f51-96e3-52f6c6310170\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.044742 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.046964 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.054235 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcm67\" (UniqueName: \"kubernetes.io/projected/391b7add-cc22-451b-a87a-8130bb8924cb-kube-api-access-vcm67\") pod \"collect-profiles-29493585-tx6j9\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.065697 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.069502 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbcrt\" (UniqueName: \"kubernetes.io/projected/bf6103ed-279b-4aed-846b-5437d8041540-kube-api-access-wbcrt\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.090636 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkw8\" (UniqueName: \"kubernetes.io/projected/554d4a29-2a6d-44cf-a4a9-641478e299d9-kube-api-access-2qkw8\") pod \"service-ca-9c57cc56f-bp7hn\" (UID: \"554d4a29-2a6d-44cf-a4a9-641478e299d9\") " pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.098461 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.107491 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xgnx2" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.108508 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.108820 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.608806126 +0000 UTC m=+152.884777637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.110817 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qf5\" (UniqueName: \"kubernetes.io/projected/a4da13ff-7bf6-42cf-a5bb-352f453abdb4-kube-api-access-h7qf5\") pod \"ingress-canary-9cpcp\" (UID: \"a4da13ff-7bf6-42cf-a5bb-352f453abdb4\") " pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.125372 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxjbj\" (UniqueName: \"kubernetes.io/projected/729ae87f-e430-460d-a99c-7b65c5e0f71c-kube-api-access-vxjbj\") pod \"dns-default-8v8wj\" (UID: \"729ae87f-e430-460d-a99c-7b65c5e0f71c\") " pod="openshift-dns/dns-default-8v8wj" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.140420 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.150601 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.153592 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf6103ed-279b-4aed-846b-5437d8041540-bound-sa-token\") pod \"ingress-operator-5b745b69d9-x5qnz\" (UID: \"bf6103ed-279b-4aed-846b-5437d8041540\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.175267 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsfkg\" (UniqueName: \"kubernetes.io/projected/df7cb6af-bde0-450e-a092-732c69105881-kube-api-access-dsfkg\") pod \"service-ca-operator-777779d784-88mbt\" (UID: \"df7cb6af-bde0-450e-a092-732c69105881\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.192022 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.210223 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.210440 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.210807 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.710789204 +0000 UTC m=+152.986760715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.232929 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.240841 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.252650 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.273702 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.307373 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.313264 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.313798 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.813777581 +0000 UTC m=+153.089749132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: W0128 15:48:00.319742 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d22972e_928a_456e_9357_4693bb34d49d.slice/crio-dcb4a55f72460085114c29261d3e32e8dfe3e4018f9ad48c6cb153ac13f41b08 WatchSource:0}: Error finding container dcb4a55f72460085114c29261d3e32e8dfe3e4018f9ad48c6cb153ac13f41b08: Status 404 returned error can't find the container with id dcb4a55f72460085114c29261d3e32e8dfe3e4018f9ad48c6cb153ac13f41b08 Jan 28 15:48:00 crc kubenswrapper[4903]: W0128 15:48:00.320730 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cbaa640_07e4_402d_80d3_bb4bc85c9ec5.slice/crio-e3195b52eaad4280b4af2e3b96ac915e18a64b37e1a1781b12e5ca10e02065c8 WatchSource:0}: Error finding container e3195b52eaad4280b4af2e3b96ac915e18a64b37e1a1781b12e5ca10e02065c8: Status 404 returned error can't find the container with id e3195b52eaad4280b4af2e3b96ac915e18a64b37e1a1781b12e5ca10e02065c8 Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.326220 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.352013 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" event={"ID":"92ef8d59-61e9-4e51-97ca-58f14e72535f","Type":"ContainerStarted","Data":"27f4c300418391b4e8a291b6e57a95f26db3412e68c88f949c37cbbd625f36c9"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.352064 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" event={"ID":"92ef8d59-61e9-4e51-97ca-58f14e72535f","Type":"ContainerStarted","Data":"eb69142cbe5c2d4435c21c3a569319c386bb6bb3d0071ef3203b6edcd5af6656"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.353481 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" event={"ID":"9d22972e-928a-456e-9357-4693bb34d49d","Type":"ContainerStarted","Data":"dcb4a55f72460085114c29261d3e32e8dfe3e4018f9ad48c6cb153ac13f41b08"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.356720 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" event={"ID":"fe36423a-6685-4edb-b85f-f6aded8a37a7","Type":"ContainerStarted","Data":"f49947187c8b4bb1a427fd03c51714f881f58c0d287014c6ef1ace92b9a07afd"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.360859 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" event={"ID":"6e2b7db2-b2c4-4975-b84d-4772de0bae9c","Type":"ContainerStarted","Data":"cd87a2a13959c173e2144303bc31f0d349184e6c8e03fab01996331160db973b"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.362008 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-tcmkg" event={"ID":"a1c4af21-1253-4476-8f98-98377ab79e81","Type":"ContainerStarted","Data":"3ea62c0d6413abba290c08dadd6506924fd4471e268401d9979eee73c643a796"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.366193 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" event={"ID":"1459b817-2f82-48c8-8267-bdef187b4df9","Type":"ContainerStarted","Data":"9b22266a0e8f9bbb82bdb905ca3f70cad55207e8faccee5c1c561e9e62467234"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.373886 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" event={"ID":"cfd6fe6c-cdb9-4b41-a9f4-e245780116be","Type":"ContainerStarted","Data":"289d50e839f8d0fac4b1a05c439a860c6024022790b409500ffb168d2684b82e"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.376089 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" event={"ID":"e03c0f97-6757-450b-a33d-d76ba42fd4b7","Type":"ContainerStarted","Data":"b73dfec05d9715e00d10ce88a323913fc671fbb01dbb6b0432f71b501fe330c3"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.376142 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" event={"ID":"e03c0f97-6757-450b-a33d-d76ba42fd4b7","Type":"ContainerStarted","Data":"b990502a021c29b7bd64e5f3a24acd7d243b6e8864b1f9007dd5f54b219e8808"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.379142 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-522t5" event={"ID":"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1","Type":"ContainerStarted","Data":"ae869a673bd64e3fa482272fc8392d191a8386a216c291adeea238a183176dc8"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.384105 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" event={"ID":"4b1cf44e-4593-4c6c-9a2c-d742840ec711","Type":"ContainerStarted","Data":"a50599f44ec5240520d5a7345d860718ab662d6f34c558304f9d86843166ed9d"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.385660 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" event={"ID":"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff","Type":"ContainerStarted","Data":"bc7db0b8f022386e5cc40123db15adec0fe5917426b1c2f2eed81a7b52368651"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.385939 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.387514 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" event={"ID":"8bb1df7f-1aea-4d75-b905-d87e7d34c27b","Type":"ContainerStarted","Data":"56153dc759066eca8f8801c1c95f941672f0ab76a39f116b2e54aafb66b1e581"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.394476 4903 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-znp46 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.394523 4903 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" podUID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.398631 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" event={"ID":"9f43563c-173f-4276-ac59-02fc755b6585","Type":"ContainerStarted","Data":"b57f0c3f4e9ac56940bf7adede44429d9e93d6a71c4bfa01e1935c4a1834445e"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.398672 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" event={"ID":"9f43563c-173f-4276-ac59-02fc755b6585","Type":"ContainerStarted","Data":"49c65416cad9c207aa297c0bd2540d4fc76cb2ab04eded387489ea5b54d6117b"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.399028 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.404133 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9cpcp" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.405016 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" event={"ID":"2c32e095-4835-4959-88e5-f061f89b5c41","Type":"ContainerStarted","Data":"449f454844e5a1048e19ad5c2758cf2c37983033cb1c69cde6a7aebe3dffb265"} Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.406878 4903 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4vvt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.406928 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" podUID="9f43563c-173f-4276-ac59-02fc755b6585" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.416768 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.417100 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:00.917084907 +0000 UTC m=+153.193056418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.417384 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8v8wj" Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.505252 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.507786 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.519568 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.519851 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.019839016 +0000 UTC m=+153.295810517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.528737 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.584631 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.598854 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.605633 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5ddtc"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.617842 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.622227 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.622412 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.12239556 +0000 UTC m=+153.398367061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.622611 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.622885 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.122876354 +0000 UTC m=+153.398847865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.671548 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dqbbb"] Jan 28 15:48:00 crc kubenswrapper[4903]: W0128 15:48:00.706335 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc63ca4f5_f2b9_4f84_a2e8_1f2323f750f7.slice/crio-85b23569d061480e519219e62b05bf7ddae65c418e7fd1a720b2eb696208d0f4 WatchSource:0}: Error finding container 85b23569d061480e519219e62b05bf7ddae65c418e7fd1a720b2eb696208d0f4: Status 404 returned error can't find the container with id 85b23569d061480e519219e62b05bf7ddae65c418e7fd1a720b2eb696208d0f4 Jan 28 15:48:00 crc kubenswrapper[4903]: W0128 15:48:00.707825 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2489fa1c_af9a_4082_a875_738a1c2fae88.slice/crio-e1f8f8ac837dda2cf632f7ce26ab6ea8ecd17471c5e196cab81be2b2ae0b6d82 WatchSource:0}: Error finding container e1f8f8ac837dda2cf632f7ce26ab6ea8ecd17471c5e196cab81be2b2ae0b6d82: Status 404 returned error can't find the container with id e1f8f8ac837dda2cf632f7ce26ab6ea8ecd17471c5e196cab81be2b2ae0b6d82 Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.723675 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.724007 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.223990357 +0000 UTC m=+153.499961878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.831715 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.832180 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.33216485 +0000 UTC m=+153.608136361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.846519 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fp7dl"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.938997 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:00 crc kubenswrapper[4903]: E0128 15:48:00.939304 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.439288155 +0000 UTC m=+153.715259666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.955226 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9"] Jan 28 15:48:00 crc kubenswrapper[4903]: I0128 15:48:00.960823 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.040134 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.040411 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.540399478 +0000 UTC m=+153.816370989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: W0128 15:48:01.137704 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda35915fe_4b5b_4c69_8abb_2d2d22e423c5.slice/crio-7b07c8f9732fca4477a46b9230c9387110de525bb3f6a17e5a2702ce9d6cb269 WatchSource:0}: Error finding container 7b07c8f9732fca4477a46b9230c9387110de525bb3f6a17e5a2702ce9d6cb269: Status 404 returned error can't find the container with id 7b07c8f9732fca4477a46b9230c9387110de525bb3f6a17e5a2702ce9d6cb269 Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.141737 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.142107 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.642092286 +0000 UTC m=+153.918063797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.168279 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.245744 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.248428 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.248782 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.748766518 +0000 UTC m=+154.024738039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.251041 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fz85j"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.258858 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.323350 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-88mbt"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.329415 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zxr6z"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.351607 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.352271 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.852254909 +0000 UTC m=+154.128226410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.446924 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" event={"ID":"6e2b7db2-b2c4-4975-b84d-4772de0bae9c","Type":"ContainerStarted","Data":"109066ec96e95dfeed935e7d5d741edc0aee217e56a058ad3ccc49b365c6f84b"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.451384 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" podStartSLOduration=132.451365394 podStartE2EDuration="2m12.451365394s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:01.413810317 +0000 UTC m=+153.689781828" watchObservedRunningTime="2026-01-28 15:48:01.451365394 +0000 UTC m=+153.727336905" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.454097 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.454599 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:01.954585216 +0000 UTC m=+154.230556727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.463410 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tcmkg" event={"ID":"a1c4af21-1253-4476-8f98-98377ab79e81","Type":"ContainerStarted","Data":"834ad15bcc5f77d3b2af9b49589a84c43c28e1216c1e6f738f89a07f58bf44db"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.463556 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.466965 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.467017 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.474323 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" podStartSLOduration=132.474305487 podStartE2EDuration="2m12.474305487s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:01.473466553 +0000 UTC m=+153.749438074" watchObservedRunningTime="2026-01-28 15:48:01.474305487 +0000 UTC m=+153.750276998" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.477747 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xgnx2" event={"ID":"39430551-2b2f-42ca-a36d-ddfea173a4df","Type":"ContainerStarted","Data":"c22ae1980fadfdf1a4ca6ecca474adf0656508e16639e648a7d3035ee3c698f8"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.483435 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" event={"ID":"be0f6d6d-ffaf-4889-a91d-a2a79d69758a","Type":"ContainerStarted","Data":"47d48aa4767a23c93df58b35cb07eab632a9bcf2f148265f0804cc1e07409357"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.488007 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" event={"ID":"8c895b2d-4baa-40f3-b942-9a64cd93f395","Type":"ContainerStarted","Data":"b2ae3853578200e1cc62a5612bb2f2746f16e65b67f7fdb024a728b0da96ac0c"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.490902 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" 
event={"ID":"06307fc1-5240-40a9-893d-e302e487fce2","Type":"ContainerStarted","Data":"a473ac30ac3a254af47b1e61dafcdd02e81262d3835933ba2162ae2e45fa5859"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.491668 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" event={"ID":"2489fa1c-af9a-4082-a875-738a1c2fae88","Type":"ContainerStarted","Data":"e1f8f8ac837dda2cf632f7ce26ab6ea8ecd17471c5e196cab81be2b2ae0b6d82"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.492385 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kr4qg" event={"ID":"25dd11d8-a217-40ac-8d11-03b28106776c","Type":"ContainerStarted","Data":"623f302c335fa61b8ab9779eda90a76edf062f20e966b7b3f6a5f975286f32d3"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.498259 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" event={"ID":"440d0fa6-743a-46f6-843a-f3af8e9ec321","Type":"ContainerStarted","Data":"41edfcd2c18c06d03b9ae042790cc12816e15cd620654069d02f113153ece331"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.501293 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" event={"ID":"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7","Type":"ContainerStarted","Data":"85b23569d061480e519219e62b05bf7ddae65c418e7fd1a720b2eb696208d0f4"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.510870 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" event={"ID":"cfd6fe6c-cdb9-4b41-a9f4-e245780116be","Type":"ContainerStarted","Data":"d6eda39366f9b487e2623f4afae30bd13bec52055e132183ce806567faa0adee"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.516581 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.517077 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" event={"ID":"1782c794-6457-46e7-9ddb-547b000c6bf7","Type":"ContainerStarted","Data":"e2707ac7d75f8a76950a6f1fa62a8f7b65dcfbe801d750c943405299fe4586b3"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.519263 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" event={"ID":"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5","Type":"ContainerStarted","Data":"e3195b52eaad4280b4af2e3b96ac915e18a64b37e1a1781b12e5ca10e02065c8"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.523334 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" event={"ID":"e8d4831f-857e-492e-b40a-d2f1a7b38780","Type":"ContainerStarted","Data":"4f681e138edad25f8e7a22dc2d0cfe44aa47c094b36412d7f2a6880b6657d0d5"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.526448 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" event={"ID":"69321e4b-4392-413f-839b-57040cd0a9bb","Type":"ContainerStarted","Data":"0dd2dc873b4bedce523661ced7429e513c97df0591122ad2f49018c6c7c88195"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.528812 4903 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" event={"ID":"94760384-fcfe-4f1e-bd84-aa310251260c","Type":"ContainerStarted","Data":"e28778086b8acc84249fc46b13702e3a66114a5b1c0d4d288136efd7c3e9a01c"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.533582 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" event={"ID":"a35915fe-4b5b-4c69-8abb-2d2d22e423c5","Type":"ContainerStarted","Data":"7b07c8f9732fca4477a46b9230c9387110de525bb3f6a17e5a2702ce9d6cb269"} Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.534825 4903 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-znp46 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.534864 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" podUID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.536844 4903 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4vvt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.536894 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" podUID="9f43563c-173f-4276-ac59-02fc755b6585" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.558670 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.559750 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.059720414 +0000 UTC m=+154.335691925 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.629871 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-f5mnt" podStartSLOduration=132.629850886 podStartE2EDuration="2m12.629850886s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:01.592555806 +0000 UTC m=+153.868527317" watchObservedRunningTime="2026-01-28 15:48:01.629850886 +0000 UTC m=+153.905822397" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.660438 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.660881 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.160866797 +0000 UTC m=+154.436838318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.761244 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.761457 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.261426415 +0000 UTC m=+154.537397936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.761725 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.762043 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.262031992 +0000 UTC m=+154.538003503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.796275 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bp7hn"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.825568 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.863119 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.864101 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.364079531 +0000 UTC m=+154.640051042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.882833 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-s8rwr" podStartSLOduration=132.882805603 podStartE2EDuration="2m12.882805603s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:01.881486647 +0000 UTC m=+154.157458168" watchObservedRunningTime="2026-01-28 15:48:01.882805603 +0000 UTC m=+154.158777114" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.926283 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9cpcp"] Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.946349 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tcmkg" podStartSLOduration=132.946327139 podStartE2EDuration="2m12.946327139s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:01.941595504 +0000 UTC m=+154.217567035" watchObservedRunningTime="2026-01-28 15:48:01.946327139 +0000 UTC m=+154.222298650" Jan 28 15:48:01 crc kubenswrapper[4903]: I0128 15:48:01.965454 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:01 crc kubenswrapper[4903]: E0128 15:48:01.966188 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.466171942 +0000 UTC m=+154.742143453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:01.999069 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-jn67q" podStartSLOduration=132.999047576 podStartE2EDuration="2m12.999047576s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:01.983484024 +0000 UTC m=+154.259455545" watchObservedRunningTime="2026-01-28 15:48:01.999047576 +0000 UTC m=+154.275019087" Jan 28 15:48:02 crc kubenswrapper[4903]: W0128 15:48:02.006680 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4da13ff_7bf6_42cf_a5bb_352f453abdb4.slice/crio-0444e32ca0965dc88ebc34227ba69ecf365573b5e2a8d6740637dabf8615c8e9 WatchSource:0}: Error finding container 0444e32ca0965dc88ebc34227ba69ecf365573b5e2a8d6740637dabf8615c8e9: Status 404 returned error can't find the container with id 0444e32ca0965dc88ebc34227ba69ecf365573b5e2a8d6740637dabf8615c8e9 Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.049406 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf"] Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.054281 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj"] Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.067138 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.067476 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.56746152 +0000 UTC m=+154.843433031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.128932 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8v8wj"] Jan 28 15:48:02 crc kubenswrapper[4903]: W0128 15:48:02.146820 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7afbeb7b_ff1e_40bf_903c_64e61eb493d7.slice/crio-9c573d0f2ad221cf1b91a99a98177a772e30acab77a9d56b85b0a249ded438a6 WatchSource:0}: Error finding container 9c573d0f2ad221cf1b91a99a98177a772e30acab77a9d56b85b0a249ded438a6: Status 404 returned error can't find the container with id 9c573d0f2ad221cf1b91a99a98177a772e30acab77a9d56b85b0a249ded438a6 Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.168563 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.168911 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.668895683 +0000 UTC m=+154.944867194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.269595 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.269769 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.769755139 +0000 UTC m=+155.045726650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.269810 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.270067 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.770060798 +0000 UTC m=+155.046032309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.370832 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.370991 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.870969325 +0000 UTC m=+155.146940836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.372019 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.372681 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.872668253 +0000 UTC m=+155.148639974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.473723 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.474690 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:02.974673982 +0000 UTC m=+155.250645493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.561552 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" event={"ID":"fe36423a-6685-4edb-b85f-f6aded8a37a7","Type":"ContainerStarted","Data":"6f9b7e3a4acdf328fed631a23aa9f33989e9a31d0b4144d9183148be7487db4a"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.573156 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" event={"ID":"554d4a29-2a6d-44cf-a4a9-641478e299d9","Type":"ContainerStarted","Data":"e0a743e340a6a559a0e8dc963856795bd50137c2ba9826da7e447d8a4cf1559e"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.585995 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.586506 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.086485399 +0000 UTC m=+155.362456910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.594874 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" event={"ID":"e8d4831f-857e-492e-b40a-d2f1a7b38780","Type":"ContainerStarted","Data":"08ad354ef6c8e58639d2e9a9189290aa3c74a235563b2fa44d33ceae8896c89a"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.606202 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hwxwx" podStartSLOduration=133.606170798 podStartE2EDuration="2m13.606170798s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:02.587963221 +0000 UTC m=+154.863934752" watchObservedRunningTime="2026-01-28 15:48:02.606170798 +0000 UTC m=+154.882142309" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.623608 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9cpcp" event={"ID":"a4da13ff-7bf6-42cf-a5bb-352f453abdb4","Type":"ContainerStarted","Data":"0444e32ca0965dc88ebc34227ba69ecf365573b5e2a8d6740637dabf8615c8e9"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.627394 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" event={"ID":"1459b817-2f82-48c8-8267-bdef187b4df9","Type":"ContainerStarted","Data":"0293d1484d7b9671d2b9e0df0367840a3beb91808d87578450e4e9c9cab28337"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.634885 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" event={"ID":"4b1cf44e-4593-4c6c-9a2c-d742840ec711","Type":"ContainerStarted","Data":"7ecbc1ce36abec30f85fd430231fd0d4c8bfa2e0933bd5ae666c404f77525c72"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.642134 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" event={"ID":"9d22972e-928a-456e-9357-4693bb34d49d","Type":"ContainerStarted","Data":"5fd86ec57890f143dd4c94a2ef3506cb8c7e8ed1e361e52820d320a55f69b4e1"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.642642 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.660943 4903 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mvn4x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.661211 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" podUID="9d22972e-928a-456e-9357-4693bb34d49d" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.670861 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" event={"ID":"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e","Type":"ContainerStarted","Data":"9ae3e91041dd7aaa958ccfd14499a83fa2f06e96a53522d7303d6b3d462b777e"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.670913 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" event={"ID":"4a1feaa8-6d8a-44d3-ab2f-22e1571f175e","Type":"ContainerStarted","Data":"ff60717ec9130872a6412a2638e75b17786b6f5c255cdedc57073c284fa833a5"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.673387 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" event={"ID":"69321e4b-4392-413f-839b-57040cd0a9bb","Type":"ContainerStarted","Data":"88d86e43059b49bbf7d75824d29482519142bd31446aab07f214790677137af8"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.676013 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" event={"ID":"8bb1df7f-1aea-4d75-b905-d87e7d34c27b","Type":"ContainerStarted","Data":"c92ed948866a68f02b52fdd35cfa2a65438053a3686353c1b547a2593e746975"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.676876 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" event={"ID":"391b7add-cc22-451b-a87a-8130bb8924cb","Type":"ContainerStarted","Data":"1a087b494e4305f0fa40d156696d3361ff354dbfd1ab1ca37af55c48e1d1f7f5"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.677870 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kr4qg" event={"ID":"25dd11d8-a217-40ac-8d11-03b28106776c","Type":"ContainerStarted","Data":"e0fb33c70cdd53c14479719e5f2c77d0820d113b30181005aa721d09be57d63b"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.678630 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" event={"ID":"bff9e5b8-162e-4335-9801-3419363a16a7","Type":"ContainerStarted","Data":"ab3d35df62ef383c809943b5b14d50eed25e388dda1216fb8cc0e38848a32ee2"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.680014 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" event={"ID":"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba","Type":"ContainerStarted","Data":"72e754403328a3238dbdbbb0001344e1473126a0eb42a404a2987108f98ed0d5"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.681395 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xgnx2" event={"ID":"39430551-2b2f-42ca-a36d-ddfea173a4df","Type":"ContainerStarted","Data":"e49730a4dfa2a87fd1c8df441b32ebd8ff8567379914c3b25f01b62b86b80bf8"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.683092 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" 
event={"ID":"2489fa1c-af9a-4082-a875-738a1c2fae88","Type":"ContainerStarted","Data":"391978a9f532bbc06d4d21232ca030d076106878cd5c92533015d19b55a53fcf"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.684692 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8v8wj" event={"ID":"729ae87f-e430-460d-a99c-7b65c5e0f71c","Type":"ContainerStarted","Data":"50b4fe6c68f4c91b7564f6e79d74cc6f073dccfc9bb73d099b5fe3a74c4268ac"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.690699 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" podStartSLOduration=133.690679429 podStartE2EDuration="2m13.690679429s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:02.685599495 +0000 UTC m=+154.961571036" watchObservedRunningTime="2026-01-28 15:48:02.690679429 +0000 UTC m=+154.966650940" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.690887 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.691981 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.191960536 +0000 UTC m=+155.467932047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.700401 4903 generic.go:334] "Generic (PLEG): container finished" podID="cfd6fe6c-cdb9-4b41-a9f4-e245780116be" containerID="d6eda39366f9b487e2623f4afae30bd13bec52055e132183ce806567faa0adee" exitCode=0 Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.700605 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" event={"ID":"cfd6fe6c-cdb9-4b41-a9f4-e245780116be","Type":"ContainerDied","Data":"d6eda39366f9b487e2623f4afae30bd13bec52055e132183ce806567faa0adee"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.717884 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" event={"ID":"bf6103ed-279b-4aed-846b-5437d8041540","Type":"ContainerStarted","Data":"c36c95427d9db9f8764165fad946ab40115c8d57232321f0067de55551d0a170"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.739356 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" event={"ID":"df7cb6af-bde0-450e-a092-732c69105881","Type":"ContainerStarted","Data":"3bf21f62f236f5ad87643ecde4fb5750ecb313e3898b0d7a8ca94a60546190a8"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.745266 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" event={"ID":"7afbeb7b-ff1e-40bf-903c-64e61eb493d7","Type":"ContainerStarted","Data":"9c573d0f2ad221cf1b91a99a98177a772e30acab77a9d56b85b0a249ded438a6"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.747382 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" event={"ID":"395779b5-5c6e-45a6-8d06-361b72523703","Type":"ContainerStarted","Data":"7996d4b251b12474236020b86d698765d0d57344f740d6ab6bfca587f490692b"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.752907 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-522t5" event={"ID":"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1","Type":"ContainerStarted","Data":"b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.757028 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" event={"ID":"ab8e1a4a-7e88-4f51-96e3-52f6c6310170","Type":"ContainerStarted","Data":"749c83a92977e397b821d28812e69611ec2f81762b6d07657d57a16f087cd765"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.760345 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" event={"ID":"8c895b2d-4baa-40f3-b942-9a64cd93f395","Type":"ContainerStarted","Data":"522c6ed799640197de60fc41100f8c99a76f0b1c62c41debd1f6da52e4789a62"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.768293 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" event={"ID":"1cbaa640-07e4-402d-80d3-bb4bc85c9ec5","Type":"ContainerStarted","Data":"c948f0bfb1944563ebe7f4b980e762d9dbf085141d7215331014fadb1064b7b9"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.779020 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-522t5" podStartSLOduration=133.778999799 podStartE2EDuration="2m13.778999799s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:02.774933484 +0000 UTC m=+155.050905005" watchObservedRunningTime="2026-01-28 15:48:02.778999799 +0000 UTC m=+155.054971300" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.780410 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" event={"ID":"331fb96f-546c-4218-9f5b-6a358daf2f16","Type":"ContainerStarted","Data":"cf5c2ecc1cdb321828b044a0b63babb8aba799aee4cd42cf31847c7611420760"} Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.781177 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.781216 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.784697 4903 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-t4vvt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.784751 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" podUID="9f43563c-173f-4276-ac59-02fc755b6585" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.796116 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-krlbv" podStartSLOduration=133.79610052499999 podStartE2EDuration="2m13.796100525s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:02.795162528 +0000 UTC m=+155.071134049" watchObservedRunningTime="2026-01-28 15:48:02.796100525 +0000 UTC m=+155.072072036" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.796954 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.799731 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.299708487 +0000 UTC m=+155.575680178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.840244 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-w6pt2" podStartSLOduration=133.840221228 podStartE2EDuration="2m13.840221228s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:02.832043526 +0000 UTC m=+155.108015037" watchObservedRunningTime="2026-01-28 15:48:02.840221228 +0000 UTC m=+155.116192739" Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.899083 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.899276 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.399245205 +0000 UTC m=+155.675216716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:02 crc kubenswrapper[4903]: I0128 15:48:02.899546 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:02 crc kubenswrapper[4903]: E0128 15:48:02.901587 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.401574752 +0000 UTC m=+155.677546323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.000849 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.001490 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.50145658 +0000 UTC m=+155.777428101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.102877 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.103289 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.603274793 +0000 UTC m=+155.879246304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.204505 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.204672 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.704642064 +0000 UTC m=+155.980613585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.204705 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.205072 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.705062225 +0000 UTC m=+155.981033736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.305863 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.306061 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.806032354 +0000 UTC m=+156.082003865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.306803 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.307259 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.807247349 +0000 UTC m=+156.083218860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.407603 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.407875 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:03.907861438 +0000 UTC m=+156.183832949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.509037 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.509397 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.009381962 +0000 UTC m=+156.285353473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.609564 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.609637 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.10961004 +0000 UTC m=+156.385581551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.609850 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.610231 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.110217357 +0000 UTC m=+156.386188858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.712901 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.713104 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.21307431 +0000 UTC m=+156.489045821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.713864 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.714293 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.214274904 +0000 UTC m=+156.490246415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.787339 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" event={"ID":"bf6103ed-279b-4aed-846b-5437d8041540","Type":"ContainerStarted","Data":"fb4877f22f8250192c58ff9f3e496b7351184b5524b3446a4cf20ea50b66da69"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.789540 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" event={"ID":"8bb1df7f-1aea-4d75-b905-d87e7d34c27b","Type":"ContainerStarted","Data":"fe5662e1b796d3e7d81cd3f910c4a34582f4651df967f8d2556e829b4037a2a6"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.791837 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9cpcp" event={"ID":"a4da13ff-7bf6-42cf-a5bb-352f453abdb4","Type":"ContainerStarted","Data":"c9462548f2ca50b5410d61754f04d4bf12386555f14afdb99978ec7898e78454"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.794080 4903 generic.go:334] "Generic (PLEG): container finished" podID="94760384-fcfe-4f1e-bd84-aa310251260c" containerID="ef4f3f18435ba4d55d1280019e55a0b854317ee8b7f8fe3f16e2f60c60228cf3" exitCode=0 Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.794139 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" event={"ID":"94760384-fcfe-4f1e-bd84-aa310251260c","Type":"ContainerDied","Data":"ef4f3f18435ba4d55d1280019e55a0b854317ee8b7f8fe3f16e2f60c60228cf3"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.796017 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" 
event={"ID":"1782c794-6457-46e7-9ddb-547b000c6bf7","Type":"ContainerStarted","Data":"c2bdb85f2ac3f277be356a3d86aac0fb4823015c17c3160e63b41c4e64ab6d67"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.797348 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" event={"ID":"bff9e5b8-162e-4335-9801-3419363a16a7","Type":"ContainerStarted","Data":"4fe11cea88380e7811a60141619d936ba45267a76dc9544d3ad18d39baaf5b2f"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.797561 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.799187 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" event={"ID":"391b7add-cc22-451b-a87a-8130bb8924cb","Type":"ContainerStarted","Data":"d8084ad351cce3a1f6006c8d90267e8a3714a75e0e207d86b8d34f832206762e"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.799476 4903 patch_prober.go:28] interesting pod/console-operator-58897d9998-zxr6z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.799512 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" podUID="bff9e5b8-162e-4335-9801-3419363a16a7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.801213 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" event={"ID":"554d4a29-2a6d-44cf-a4a9-641478e299d9","Type":"ContainerStarted","Data":"e314ae772bbe6cf9b409f022887b3dd431722f77fe0ca9f22719bdc8c82df147"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.805082 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" event={"ID":"395779b5-5c6e-45a6-8d06-361b72523703","Type":"ContainerStarted","Data":"fcadb4acbc811c3b03bb87c46e9a0fdd14ccd4c2c94e99a9a4979896e20d650c"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.807304 4903 generic.go:334] "Generic (PLEG): container finished" podID="1459b817-2f82-48c8-8267-bdef187b4df9" containerID="0293d1484d7b9671d2b9e0df0367840a3beb91808d87578450e4e9c9cab28337" exitCode=0 Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.807821 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" event={"ID":"1459b817-2f82-48c8-8267-bdef187b4df9","Type":"ContainerDied","Data":"0293d1484d7b9671d2b9e0df0367840a3beb91808d87578450e4e9c9cab28337"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.813366 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" event={"ID":"ab8e1a4a-7e88-4f51-96e3-52f6c6310170","Type":"ContainerStarted","Data":"ba9b5e43aff68b7a89caff9f1d36e13ebd9f85e30a3abb6828fbfc666e04ee8b"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.813612 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.814245 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.814420 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.314393899 +0000 UTC m=+156.590365410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.814889 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.815320 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.315306955 +0000 UTC m=+156.591278466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.815937 4903 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bjxkj container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.815965 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" podUID="ab8e1a4a-7e88-4f51-96e3-52f6c6310170" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.819075 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" event={"ID":"440d0fa6-743a-46f6-843a-f3af8e9ec321","Type":"ContainerStarted","Data":"b08a553ebf84893da820d1e1c79bb13c3d5d9a21010ea49dbcc2b25b78f74c75"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.819660 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.822326 4903 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zlvqj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.822469 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" podUID="440d0fa6-743a-46f6-843a-f3af8e9ec321" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.824827 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9cpcp" podStartSLOduration=6.824806924 podStartE2EDuration="6.824806924s" podCreationTimestamp="2026-01-28 15:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:03.821947833 +0000 UTC m=+156.097919364" watchObservedRunningTime="2026-01-28 15:48:03.824806924 +0000 UTC m=+156.100778435" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.828324 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" event={"ID":"df7cb6af-bde0-450e-a092-732c69105881","Type":"ContainerStarted","Data":"52de81f71367e3a0f31dbfcb4c3631bf4b4aae7c0fac7285b1fe2076658983c3"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.841094 4903 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" event={"ID":"be0f6d6d-ffaf-4889-a91d-a2a79d69758a","Type":"ContainerStarted","Data":"1c82ae5bff552c82cae190673e343f3192afabaf39a5b332fc73398448551c7a"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.842032 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.843458 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" event={"ID":"06307fc1-5240-40a9-893d-e302e487fce2","Type":"ContainerStarted","Data":"f5db76b0db160449c46d8155663eaf8d3e53b0cefffdb266d6a71adf5e2473d6"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.845799 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" event={"ID":"a35915fe-4b5b-4c69-8abb-2d2d22e423c5","Type":"ContainerStarted","Data":"4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.846639 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.847793 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8v8wj" event={"ID":"729ae87f-e430-460d-a99c-7b65c5e0f71c","Type":"ContainerStarted","Data":"2ce49b139f026ab5a3c0dc983299e28b365ce4114b98772d80aee2adf318b86c"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.851080 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" event={"ID":"c63ca4f5-f2b9-4f84-a2e8-1f2323f750f7","Type":"ContainerStarted","Data":"c57d8c7c26d52f5fffbdb833b835db99788e7252eda25ce5e674c68361ed98a0"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.853013 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" event={"ID":"4b1cf44e-4593-4c6c-9a2c-d742840ec711","Type":"ContainerStarted","Data":"66e820066cd28ff98d2241f4b85c8cc5d5c8a5c2f9e45538259065e2523689b3"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.855160 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" event={"ID":"2489fa1c-af9a-4082-a875-738a1c2fae88","Type":"ContainerStarted","Data":"5ee6594a1c265f55c50eaa55ea1e743bbb1b1b6c2d63fb12c4898db467c5ae6c"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.858286 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" event={"ID":"7afbeb7b-ff1e-40bf-903c-64e61eb493d7","Type":"ContainerStarted","Data":"00ed622137b5fe51a31d059aae2fd7183cf9e8fb33c1e5eb46475bb57a337cea"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.858856 4903 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dqbbb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.859207 4903 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fp7dl container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.859264 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.859880 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.871015 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" event={"ID":"331fb96f-546c-4218-9f5b-6a358daf2f16","Type":"ContainerStarted","Data":"ddc3d4d22db6468a4e39b7ae0399fcd0ae9aba39132101d753598f8c5e2a8a9b"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.874934 4903 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mvn4x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.874969 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" podUID="9d22972e-928a-456e-9357-4693bb34d49d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.875007 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" event={"ID":"cfd6fe6c-cdb9-4b41-a9f4-e245780116be","Type":"ContainerStarted","Data":"ce20de5ff650e14189a9d8b303b17feb2131de82a8d49c8f4932a1793a296318"} Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.889314 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bp7hn" podStartSLOduration=134.889289187 podStartE2EDuration="2m14.889289187s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:03.885320375 +0000 UTC m=+156.161291886" watchObservedRunningTime="2026-01-28 15:48:03.889289187 +0000 UTC m=+156.165260698" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.904782 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" podStartSLOduration=134.904759687 podStartE2EDuration="2m14.904759687s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:03.90379945 +0000 UTC m=+156.179770971" watchObservedRunningTime="2026-01-28 15:48:03.904759687 +0000 UTC 
m=+156.180731198" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.915627 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:03 crc kubenswrapper[4903]: E0128 15:48:03.917281 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.417254081 +0000 UTC m=+156.693225592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.927789 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" podStartSLOduration=134.9277655 podStartE2EDuration="2m14.9277655s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:03.926119764 +0000 UTC m=+156.202091275" watchObservedRunningTime="2026-01-28 15:48:03.9277655 +0000 UTC m=+156.203737011" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.940938 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" podStartSLOduration=134.940915844 podStartE2EDuration="2m14.940915844s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:03.940643786 +0000 UTC m=+156.216615307" watchObservedRunningTime="2026-01-28 15:48:03.940915844 +0000 UTC m=+156.216887355" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.964345 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" podStartSLOduration=134.964323519 podStartE2EDuration="2m14.964323519s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:03.963068323 +0000 UTC m=+156.239039834" watchObservedRunningTime="2026-01-28 15:48:03.964323519 +0000 UTC m=+156.240295030" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.964811 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.967801 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: 
connection refused" start-of-body= Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.968022 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 15:48:03 crc kubenswrapper[4903]: I0128 15:48:03.993208 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-88mbt" podStartSLOduration=134.99318938 podStartE2EDuration="2m14.99318938s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:03.992594272 +0000 UTC m=+156.268565793" watchObservedRunningTime="2026-01-28 15:48:03.99318938 +0000 UTC m=+156.269160891" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.018433 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.018835 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.518820058 +0000 UTC m=+156.794791569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.023427 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" podStartSLOduration=135.023407718 podStartE2EDuration="2m15.023407718s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.017448259 +0000 UTC m=+156.293419770" watchObservedRunningTime="2026-01-28 15:48:04.023407718 +0000 UTC m=+156.299379229" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.042740 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kr4qg" podStartSLOduration=135.042721777 podStartE2EDuration="2m15.042721777s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.04106292 +0000 UTC m=+156.317034441" watchObservedRunningTime="2026-01-28 15:48:04.042721777 +0000 UTC m=+156.318693288" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.063046 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4brk9" podStartSLOduration=135.063025774 podStartE2EDuration="2m15.063025774s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.05900435 +0000 UTC m=+156.334975861" watchObservedRunningTime="2026-01-28 15:48:04.063025774 +0000 UTC m=+156.338997285" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.076591 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6xcxh" podStartSLOduration=135.076568948 podStartE2EDuration="2m15.076568948s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.075804647 +0000 UTC m=+156.351776168" watchObservedRunningTime="2026-01-28 15:48:04.076568948 +0000 UTC m=+156.352540469" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.116406 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" podStartSLOduration=135.11638089 podStartE2EDuration="2m15.11638089s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.098401929 +0000 UTC m=+156.374373450" watchObservedRunningTime="2026-01-28 15:48:04.11638089 +0000 UTC m=+156.392352401" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.117225 4903 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xgnx2" podStartSLOduration=7.117216983 podStartE2EDuration="7.117216983s" podCreationTimestamp="2026-01-28 15:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.11495752 +0000 UTC m=+156.390929051" watchObservedRunningTime="2026-01-28 15:48:04.117216983 +0000 UTC m=+156.393188494" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.119129 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.119354 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.619313853 +0000 UTC m=+156.895285364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.119654 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.119981 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.619966371 +0000 UTC m=+156.895937882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.155067 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mnp5j" podStartSLOduration=135.155041508 podStartE2EDuration="2m15.155041508s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.13362274 +0000 UTC m=+156.409594261" watchObservedRunningTime="2026-01-28 15:48:04.155041508 +0000 UTC m=+156.431013019" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.155861 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5ddtc" podStartSLOduration=135.155852321 podStartE2EDuration="2m15.155852321s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.150964872 +0000 UTC m=+156.426936383" watchObservedRunningTime="2026-01-28 15:48:04.155852321 +0000 UTC m=+156.431823832" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.172760 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8s7l4" podStartSLOduration=135.172738781 podStartE2EDuration="2m15.172738781s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.169935542 +0000 UTC m=+156.445907063" watchObservedRunningTime="2026-01-28 15:48:04.172738781 +0000 UTC m=+156.448710292" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.186492 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8r4q" podStartSLOduration=135.186472712 podStartE2EDuration="2m15.186472712s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.183731804 +0000 UTC m=+156.459703325" watchObservedRunningTime="2026-01-28 15:48:04.186472712 +0000 UTC m=+156.462444223" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.230107 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.230382 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.730340248 +0000 UTC m=+157.006311759 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.331461 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.331831 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.831817312 +0000 UTC m=+157.107788823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.433921 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.434070 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.934051426 +0000 UTC m=+157.210022937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.434752 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.435308 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:04.935292971 +0000 UTC m=+157.211264482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.535934 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.536043 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.036024243 +0000 UTC m=+157.311995754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.536176 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.536429 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.036420925 +0000 UTC m=+157.312392436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.637379 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.637758 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.137739634 +0000 UTC m=+157.413711145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.739118 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.739414 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.239401783 +0000 UTC m=+157.515373294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.840215 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.840406 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.340380182 +0000 UTC m=+157.616351723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.840459 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.840850 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.340834344 +0000 UTC m=+157.616805875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.880719 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" event={"ID":"331fb96f-546c-4218-9f5b-6a358daf2f16","Type":"ContainerStarted","Data":"e1684f39f857662816f592480acbbaa126167c691b26230ae6acef54175932ab"} Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.882373 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" event={"ID":"1782c794-6457-46e7-9ddb-547b000c6bf7","Type":"ContainerStarted","Data":"4d57dd8e38d784de07460018030675b4b92370dfe6a8616a5e332e8077c9173b"} Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.884819 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" event={"ID":"395779b5-5c6e-45a6-8d06-361b72523703","Type":"ContainerStarted","Data":"c5e8758094556d8d67d88cc765fb444d1db2a04216f0565aee4c05eaceade7f7"} Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.886952 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" event={"ID":"bf6103ed-279b-4aed-846b-5437d8041540","Type":"ContainerStarted","Data":"9a7be7fe6da19dc975639e53d0cb4f4687de06bd9e89c695aeea3a0781decf4c"} Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888483 4903 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zlvqj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888522 4903 patch_prober.go:28] 
interesting pod/console-operator-58897d9998-zxr6z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888562 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" podUID="440d0fa6-743a-46f6-843a-f3af8e9ec321" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888600 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" podUID="bff9e5b8-162e-4335-9801-3419363a16a7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888522 4903 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dqbbb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888497 4903 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-mvn4x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888659 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888692 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" podUID="9d22972e-928a-456e-9357-4693bb34d49d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888547 4903 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fp7dl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.888740 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.889507 4903 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bjxkj container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.889608 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" podUID="ab8e1a4a-7e88-4f51-96e3-52f6c6310170" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.903174 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6c24v" podStartSLOduration=135.903159366 podStartE2EDuration="2m15.903159366s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.901311323 +0000 UTC m=+157.177282834" watchObservedRunningTime="2026-01-28 15:48:04.903159366 +0000 UTC m=+157.179130877" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.919308 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gmb7b" podStartSLOduration=135.919291674 podStartE2EDuration="2m15.919291674s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:04.918316746 +0000 UTC m=+157.194288257" watchObservedRunningTime="2026-01-28 15:48:04.919291674 +0000 UTC m=+157.195263185" Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.941357 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:04 crc kubenswrapper[4903]: E0128 15:48:04.944545 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.44451389 +0000 UTC m=+157.720485401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.965578 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 15:48:04 crc kubenswrapper[4903]: I0128 15:48:04.965635 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.043584 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.043875 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.543863233 +0000 UTC m=+157.819834744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.144781 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.144903 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.644884514 +0000 UTC m=+157.920856035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.145173 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.145548 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.645519962 +0000 UTC m=+157.921491473 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.245998 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.246177 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.746150032 +0000 UTC m=+158.022121553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.246635 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.246982 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.746967254 +0000 UTC m=+158.022938765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.347582 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.347938 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.847923173 +0000 UTC m=+158.123894684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.449217 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.449571 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:05.949553891 +0000 UTC m=+158.225525402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.550312 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.550643 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.050627653 +0000 UTC m=+158.326599164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.652016 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.652365 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.152352804 +0000 UTC m=+158.428324315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.753347 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.753514 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.253480647 +0000 UTC m=+158.529452158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.753626 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.753903 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.253891698 +0000 UTC m=+158.529863209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.854848 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.354826666 +0000 UTC m=+158.630798177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.854941 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.855214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.855497 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.355487596 +0000 UTC m=+158.631459107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.893448 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" event={"ID":"1459b817-2f82-48c8-8267-bdef187b4df9","Type":"ContainerStarted","Data":"d75c3943ac826a6e534c8358d5a9b24d8c90ef4f69a8c45adccbd684dcf9bd8d"} Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.895067 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8v8wj" event={"ID":"729ae87f-e430-460d-a99c-7b65c5e0f71c","Type":"ContainerStarted","Data":"099ca59934f0abc2d95645e9cb74e37a4ee9af960a304464eb92097a135aee8b"} Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.897034 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" event={"ID":"cfd6fe6c-cdb9-4b41-a9f4-e245780116be","Type":"ContainerStarted","Data":"183a4d2929e248ad6620303bef1fa444570c30795a9b7d3fe87b3cdaba4f1a35"} Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.898583 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" event={"ID":"94760384-fcfe-4f1e-bd84-aa310251260c","Type":"ContainerStarted","Data":"61fa0dcb07abe477c4fe9cc14b129e2070caa7a48bbde97fb31376ba3ff6791b"} Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.900985 4903 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-fp7dl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.901035 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.901471 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" event={"ID":"7afbeb7b-ff1e-40bf-903c-64e61eb493d7","Type":"ContainerStarted","Data":"d8e98c6302e3a32cf11337f751bdae3d0325cf38917cac3ae57b4c3c73912e94"} Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.901504 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.902005 4903 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dqbbb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.902040 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.917107 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-x5qnz" podStartSLOduration=136.917088156 podStartE2EDuration="2m16.917088156s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:05.917060425 +0000 UTC m=+158.193031936" watchObservedRunningTime="2026-01-28 15:48:05.917088156 +0000 UTC m=+158.193059667" Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.971811 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.971864 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.972364 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:05 crc kubenswrapper[4903]: E0128 15:48:05.973567 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.47355459 +0000 UTC m=+158.749526101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:05 crc kubenswrapper[4903]: I0128 15:48:05.975339 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-tpxrl" podStartSLOduration=136.97531581 podStartE2EDuration="2m16.97531581s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:05.971314187 +0000 UTC m=+158.247285698" watchObservedRunningTime="2026-01-28 15:48:05.97531581 +0000 UTC m=+158.251287341" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.010750 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" podStartSLOduration=137.010734817 podStartE2EDuration="2m17.010734817s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:05.990610795 +0000 UTC m=+158.266582306" watchObservedRunningTime="2026-01-28 15:48:06.010734817 +0000 UTC m=+158.286706318" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.037697 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6lfp8" podStartSLOduration=137.037682232 podStartE2EDuration="2m17.037682232s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:06.013061803 +0000 UTC m=+158.289033314" watchObservedRunningTime="2026-01-28 15:48:06.037682232 +0000 UTC m=+158.313653743" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.039795 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-l2wzb" podStartSLOduration=137.039787552 podStartE2EDuration="2m17.039787552s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:06.037252311 +0000 UTC m=+158.313223822" watchObservedRunningTime="2026-01-28 15:48:06.039787552 +0000 UTC m=+158.315759063" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.059364 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4m49s" podStartSLOduration=137.059344618 podStartE2EDuration="2m17.059344618s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:06.058403321 +0000 UTC m=+158.334374832" watchObservedRunningTime="2026-01-28 15:48:06.059344618 +0000 UTC m=+158.335316129" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.073906 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.074278 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.574262582 +0000 UTC m=+158.850234093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.175055 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.175512 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.675489348 +0000 UTC m=+158.951460849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.276275 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.276690 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.776675984 +0000 UTC m=+159.052647505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.377338 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.377606 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.87756808 +0000 UTC m=+159.153539601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.377706 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.378052 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.878037273 +0000 UTC m=+159.154008784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.478494 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.478795 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.978769106 +0000 UTC m=+159.254740617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.478903 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.479204 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:06.979192837 +0000 UTC m=+159.255164348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.580459 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.581045 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.081009071 +0000 UTC m=+159.356980602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.682885 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.182870415 +0000 UTC m=+159.458841926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.682512 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.784415 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.784636 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.284607785 +0000 UTC m=+159.560579296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.784804 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.785175 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.285165522 +0000 UTC m=+159.561137033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.886308 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.886521 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.386489391 +0000 UTC m=+159.662460902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.886664 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.887002 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.386994045 +0000 UTC m=+159.662965556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.905980 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.963391 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:06 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:06 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:06 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.963437 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.987701 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.987862 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.487833881 +0000 UTC m=+159.763805402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:06 crc kubenswrapper[4903]: I0128 15:48:06.987987 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:06 crc kubenswrapper[4903]: E0128 15:48:06.988389 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.488377846 +0000 UTC m=+159.764349357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.088989 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.089159 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.589128909 +0000 UTC m=+159.865100420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.089294 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.089862 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.589847599 +0000 UTC m=+159.865819110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.190481 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.190684 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.690653103 +0000 UTC m=+159.966624624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.190748 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.191122 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.691106657 +0000 UTC m=+159.967078228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.292548 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.292780 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.792743474 +0000 UTC m=+160.068714985 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.292832 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.293196 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.793185987 +0000 UTC m=+160.069157498 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.393754 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.393963 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.89393442 +0000 UTC m=+160.169905951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.394173 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.394522 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.894509296 +0000 UTC m=+160.170480807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.494983 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.495201 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.995171296 +0000 UTC m=+160.271142807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.495320 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.495652 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:07.995643319 +0000 UTC m=+160.271614830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.596356 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.596564 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.096517605 +0000 UTC m=+160.372489116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.596653 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.596975 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.096966619 +0000 UTC m=+160.372938120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.697761 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.697964 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.197933297 +0000 UTC m=+160.473904808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.698035 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.698304 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.198293617 +0000 UTC m=+160.474265128 (durationBeforeRetry 500ms). 
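Every mount and unmount retry above fails for the same underlying reason: the kubelet has not yet seen kubevirt.io.hostpath-provisioner register through its plugin-registration mechanism, so the driver is absent from its registered-driver list. That list is mirrored in the node's CSINode object, so it can be checked from outside the node. A minimal Go sketch follows, assuming a reachable kubeconfig at the default location; the node name "crc" and the driver name are taken from the log, everything else is illustrative:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The CSINode object for a node mirrors the kubelet's registered CSI drivers.
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	found := false
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered driver:", d.Name)
		if d.Name == "kubevirt.io.hostpath-provisioner" {
			found = true
		}
	}
	fmt.Println("kubevirt.io.hostpath-provisioner registered:", found)
}
```

Once the hostpath-provisioner plugin registers with the kubelet, the driver shows up in this list and the mount and unmount retries above can proceed.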
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.790967 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" podStartSLOduration=138.79095173 podStartE2EDuration="2m18.79095173s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:06.923604566 +0000 UTC m=+159.199576077" watchObservedRunningTime="2026-01-28 15:48:07.79095173 +0000 UTC m=+160.066923241" Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.792069 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.792673 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.797589 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.797690 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.798907 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.799041 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.2990236 +0000 UTC m=+160.574995111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.799069 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.799357 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.299349879 +0000 UTC m=+160.575321390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.809676 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.900253 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.900466 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.40042042 +0000 UTC m=+160.676391931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.900522 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.900576 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.900696 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:07 crc kubenswrapper[4903]: E0128 15:48:07.900878 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.400855793 +0000 UTC m=+160.676827304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.947982 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8v8wj" podStartSLOduration=10.947965160999999 podStartE2EDuration="10.947965161s" podCreationTimestamp="2026-01-28 15:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:07.946643064 +0000 UTC m=+160.222614575" watchObservedRunningTime="2026-01-28 15:48:07.947965161 +0000 UTC m=+160.223936672" Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.963198 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:07 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:07 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:07 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:07 crc kubenswrapper[4903]: I0128 15:48:07.963282 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.001457 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.001645 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.001699 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.501666958 +0000 UTC m=+160.777638469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.001857 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.001898 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.001981 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.002357 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.502345577 +0000 UTC m=+160.778317088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.040759 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.103266 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.103635 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
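In parallel with the stuck PVC, the revision-pruner-8-crc entries show its two volumes moving through VerifyControllerAttachedVolume, MountVolume and SetUp without trouble: a host-path volume named kubelet-dir and a projected volume named kube-api-access. Reconstructed as client-go types they look roughly like the sketch below; the volume names come from the log, while the host path and the projected sources are assumptions (the log only hints at the latter via the kube-root-ca.crt ConfigMap cache line):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			// Host-path volume; the actual path is not recorded in the log.
			Name: "kubelet-dir",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/etc/kubernetes"},
			},
		},
		{
			// Projected volume carrying the pod's API credentials; sources are illustrative.
			Name: "kube-api-access",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
						{ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						}},
					},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}
```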
No retries permitted until 2026-01-28 15:48:08.603619304 +0000 UTC m=+160.879590815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.106276 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.171447 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.172168 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.177027 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.177062 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.191673 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.204885 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.205236 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.705224002 +0000 UTC m=+160.981195503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.309954 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.310521 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91781a43-4d21-41b3-9562-4a90bcb20061-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.310590 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91781a43-4d21-41b3-9562-4a90bcb20061-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.310689 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.810675048 +0000 UTC m=+161.086646559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.411734 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91781a43-4d21-41b3-9562-4a90bcb20061-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.411801 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91781a43-4d21-41b3-9562-4a90bcb20061-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.411827 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.411891 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91781a43-4d21-41b3-9562-4a90bcb20061-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.412110 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:08.9120985 +0000 UTC m=+161.188070011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.428058 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.449310 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91781a43-4d21-41b3-9562-4a90bcb20061-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.488386 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.514186 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.514461 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.014443208 +0000 UTC m=+161.290414719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.514643 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.514945 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.014936002 +0000 UTC m=+161.290907523 (durationBeforeRetry 500ms). 
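The "No retries permitted until ... (durationBeforeRetry 500ms)" lines are the kubelet's per-volume retry bookkeeping in nestedpendingoperations.go: each failed mount or unmount is parked and re-queued after a back-off window rather than retried in a tight loop. The standalone sketch below reproduces that retry-with-backoff shape using apimachinery's wait helpers; the constants and the mountVolume stub are illustrative stand-ins, not the kubelet's own implementation:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// Stand-in for MountVolume.MountDevice; in the log this keeps failing until the
// CSI driver finishes registering with the kubelet.
var errDriverNotRegistered = errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

func mountVolume() error {
	return errDriverNotRegistered
}

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches the durationBeforeRetry seen in the log
		Factor:   2.0,                    // illustrative growth factor
		Steps:    5,                      // illustrative attempt limit
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if mountErr := mountVolume(); mountErr != nil {
			fmt.Println("retrying after failure:", mountErr)
			return false, nil // not done yet; try again after the back-off window
		}
		return true, nil
	})
	fmt.Println("final result:", err)
}
```

In the excerpt above the window stays at 500ms between attempts; the exponential factor in the sketch is only there to show the general shape of the helper.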
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.615325 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.615512 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.115486739 +0000 UTC m=+161.391458250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.615698 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.615984 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.115976203 +0000 UTC m=+161.391947714 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.710682 4903 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pvtfk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.710729 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" podUID="1459b817-2f82-48c8-8267-bdef187b4df9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.710685 4903 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pvtfk container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.710812 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" podUID="1459b817-2f82-48c8-8267-bdef187b4df9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.716764 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.716975 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.216947011 +0000 UTC m=+161.492918522 (durationBeforeRetry 500ms). 
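The openshift-config-operator Readiness and Liveness failures above are plain HTTP GET probes run by the kubelet against the container's /healthz endpoint; they fail with "connection refused" because nothing is listening on 10.217.0.25:8443 at that moment. A hand-rolled equivalent of such a probe, as a sketch: the URL comes from the log, while the timeout is an assumption and the TLS handling mirrors the fact that kubelet HTTPS probes do not verify the serving certificate; the pod IP is only reachable from inside the cluster network:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // illustrative; real probe timeouts come from the pod spec
		Transport: &http.Transport{
			// HTTPS probes are made without verifying the serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.25:8443/healthz")
	if err != nil {
		// Matches the "connect: connection refused" outcome in the log.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Any status outside 200-399 counts as a failure, e.g. the router's 500 above.
	fmt.Println("status:", resp.StatusCode)
	fmt.Println(string(body))
}
```

The router's Startup probe in the same excerpt fails the other way: the endpoint answers but returns 500 with a per-check body ([-]backend-http, [-]has-synced, [+]process-running), which the kubelet likewise records as a probe failure.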
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.717112 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.717354 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.217347103 +0000 UTC m=+161.493318614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.762935 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:48:08 crc kubenswrapper[4903]: W0128 15:48:08.787565 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod91781a43_4d21_41b3_9562_4a90bcb20061.slice/crio-048caedb1a36a91e04f5c3c515646f7f60bcdf11e69881f03b6a2fbe64826ddc WatchSource:0}: Error finding container 048caedb1a36a91e04f5c3c515646f7f60bcdf11e69881f03b6a2fbe64826ddc: Status 404 returned error can't find the container with id 048caedb1a36a91e04f5c3c515646f7f60bcdf11e69881f03b6a2fbe64826ddc Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.818369 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.818841 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.318820746 +0000 UTC m=+161.594792257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.891382 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.921913 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:08 crc kubenswrapper[4903]: E0128 15:48:08.922315 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.422296557 +0000 UTC m=+161.698268068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.929714 4903 generic.go:334] "Generic (PLEG): container finished" podID="391b7add-cc22-451b-a87a-8130bb8924cb" containerID="d8084ad351cce3a1f6006c8d90267e8a3714a75e0e207d86b8d34f832206762e" exitCode=0 Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.929794 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" event={"ID":"391b7add-cc22-451b-a87a-8130bb8924cb","Type":"ContainerDied","Data":"d8084ad351cce3a1f6006c8d90267e8a3714a75e0e207d86b8d34f832206762e"} Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.930706 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"387c62db-71dd-41d1-8f2d-06ed22d49a3a","Type":"ContainerStarted","Data":"500fdac09a7d1baf4ea11fab46ad2c3b9cceece088f0be4841215c73ee327c1d"} Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.931666 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"91781a43-4d21-41b3-9562-4a90bcb20061","Type":"ContainerStarted","Data":"048caedb1a36a91e04f5c3c515646f7f60bcdf11e69881f03b6a2fbe64826ddc"} Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.949476 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.963947 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:08 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:08 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:08 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.963996 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:08 crc kubenswrapper[4903]: I0128 15:48:08.978398 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" podStartSLOduration=139.97838128 podStartE2EDuration="2m19.97838128s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:08.977225887 +0000 UTC m=+161.253197398" watchObservedRunningTime="2026-01-28 15:48:08.97838128 +0000 UTC m=+161.254352791" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.023109 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.023276 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.523245945 +0000 UTC m=+161.799217456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.023583 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.025710 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.525696785 +0000 UTC m=+161.801668296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.125290 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.125576 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.625552652 +0000 UTC m=+161.901524163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.125667 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.126029 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.626022146 +0000 UTC m=+161.901993657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.226506 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.226869 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.726853091 +0000 UTC m=+162.002824602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.328395 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.328735 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.828721924 +0000 UTC m=+162.104693435 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.418811 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8v8wj" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.429722 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.429924 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.929875579 +0000 UTC m=+162.205847090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.430166 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.430447 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:09.930434995 +0000 UTC m=+162.206406506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.531578 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.531745 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.031721692 +0000 UTC m=+162.307693203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.531843 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.532226 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.032214997 +0000 UTC m=+162.308186508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.589814 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.589873 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.591044 4903 patch_prober.go:28] interesting pod/apiserver-76f77b778f-48dgn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.591088 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" podUID="cfd6fe6c-cdb9-4b41-a9f4-e245780116be" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.633725 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.634134 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.134114793 +0000 UTC m=+162.410086314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.635747 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.635801 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.636078 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.636126 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.733106 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.734423 4903 patch_prober.go:28] interesting pod/console-f9d7485db-522t5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.734475 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-522t5" podUID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.734758 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.734988 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.735393 4903 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.23537695 +0000 UTC m=+162.511348461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.765162 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mvn4x" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.782592 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" podStartSLOduration=140.782576051 podStartE2EDuration="2m20.782576051s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:09.034163365 +0000 UTC m=+161.310134876" watchObservedRunningTime="2026-01-28 15:48:09.782576051 +0000 UTC m=+162.058547562" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.835890 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.836104 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.336079851 +0000 UTC m=+162.612051362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.836258 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.837508 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.337500552 +0000 UTC m=+162.613472063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.885198 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.885435 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.887394 4903 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-279w4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.26:8443/livez\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.887433 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" podUID="94760384-fcfe-4f1e-bd84-aa310251260c" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.26:8443/livez\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.937642 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.937829 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.437786501 +0000 UTC m=+162.713758012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.937981 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:09 crc kubenswrapper[4903]: E0128 15:48:09.938458 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:48:10.438420529 +0000 UTC m=+162.714392040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.938877 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" event={"ID":"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba","Type":"ContainerStarted","Data":"04dd9df874ed3031137807e4a1ae9c7ad3da74ce529b56df06dabb31aab922fe"} Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.960388 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.983468 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:09 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:09 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:09 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:09 crc kubenswrapper[4903]: I0128 15:48:09.983567 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.005748 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.025881 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlvqj" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.028040 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.039730 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.040968 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.540949272 +0000 UTC m=+162.816920783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.152180 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.153821 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.653807059 +0000 UTC m=+162.929778570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.211549 4903 patch_prober.go:28] interesting pod/console-operator-58897d9998-zxr6z container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.211905 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" podUID="bff9e5b8-162e-4335-9801-3419363a16a7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.211584 4903 patch_prober.go:28] interesting pod/console-operator-58897d9998-zxr6z container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.211960 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" podUID="bff9e5b8-162e-4335-9801-3419363a16a7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.253055 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.253637 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.753623075 +0000 UTC m=+163.029594586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.278091 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bjxkj" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.284313 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.352380 4903 csr.go:261] certificate signing request csr-rcxfl is approved, waiting to be issued Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.355173 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.356364 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.856353554 +0000 UTC m=+163.132325065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.361228 4903 csr.go:257] certificate signing request csr-rcxfl is issued Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.456500 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/391b7add-cc22-451b-a87a-8130bb8924cb-secret-volume\") pod \"391b7add-cc22-451b-a87a-8130bb8924cb\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.456791 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/391b7add-cc22-451b-a87a-8130bb8924cb-config-volume\") pod \"391b7add-cc22-451b-a87a-8130bb8924cb\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.456930 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.457033 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcm67\" (UniqueName: \"kubernetes.io/projected/391b7add-cc22-451b-a87a-8130bb8924cb-kube-api-access-vcm67\") pod \"391b7add-cc22-451b-a87a-8130bb8924cb\" (UID: \"391b7add-cc22-451b-a87a-8130bb8924cb\") " Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.457688 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:10.957669663 +0000 UTC m=+163.233641184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.457899 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/391b7add-cc22-451b-a87a-8130bb8924cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "391b7add-cc22-451b-a87a-8130bb8924cb" (UID: "391b7add-cc22-451b-a87a-8130bb8924cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.465479 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/391b7add-cc22-451b-a87a-8130bb8924cb-kube-api-access-vcm67" (OuterVolumeSpecName: "kube-api-access-vcm67") pod "391b7add-cc22-451b-a87a-8130bb8924cb" (UID: "391b7add-cc22-451b-a87a-8130bb8924cb"). InnerVolumeSpecName "kube-api-access-vcm67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.476008 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/391b7add-cc22-451b-a87a-8130bb8924cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "391b7add-cc22-451b-a87a-8130bb8924cb" (UID: "391b7add-cc22-451b-a87a-8130bb8924cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.559145 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.559217 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/391b7add-cc22-451b-a87a-8130bb8924cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.559234 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcm67\" (UniqueName: \"kubernetes.io/projected/391b7add-cc22-451b-a87a-8130bb8924cb-kube-api-access-vcm67\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.559246 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/391b7add-cc22-451b-a87a-8130bb8924cb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.559742 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.059727493 +0000 UTC m=+163.335699004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.660811 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.661245 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.161227967 +0000 UTC m=+163.437199478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.763045 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.763820 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.263790221 +0000 UTC m=+163.539761782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.864777 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.865129 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.365114521 +0000 UTC m=+163.641086032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.946649 4903 generic.go:334] "Generic (PLEG): container finished" podID="91781a43-4d21-41b3-9562-4a90bcb20061" containerID="0a5623d3ed3ac5e402e56288926413606c14daced5d85d9fc23cf6edfe06c89a" exitCode=0 Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.946704 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"91781a43-4d21-41b3-9562-4a90bcb20061","Type":"ContainerDied","Data":"0a5623d3ed3ac5e402e56288926413606c14daced5d85d9fc23cf6edfe06c89a"} Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.948214 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" event={"ID":"391b7add-cc22-451b-a87a-8130bb8924cb","Type":"ContainerDied","Data":"1a087b494e4305f0fa40d156696d3361ff354dbfd1ab1ca37af55c48e1d1f7f5"} Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.948247 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a087b494e4305f0fa40d156696d3361ff354dbfd1ab1ca37af55c48e1d1f7f5" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.948418 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.955298 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"387c62db-71dd-41d1-8f2d-06ed22d49a3a","Type":"ContainerStarted","Data":"fda5b14e3ea1764c485c4fc921962c888c9a7097ab95d8db430b97cc735b28dd"} Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.963869 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:10 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:10 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:10 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.964249 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:10 crc kubenswrapper[4903]: I0128 15:48:10.966728 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:10 crc kubenswrapper[4903]: E0128 15:48:10.967057 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.467043447 +0000 UTC m=+163.743014958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.068230 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.069293 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.569277372 +0000 UTC m=+163.845248883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.170053 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.170624 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.670609991 +0000 UTC m=+163.946581502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.211571 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.211524813 podStartE2EDuration="4.211524813s" podCreationTimestamp="2026-01-28 15:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:11.018141479 +0000 UTC m=+163.294113020" watchObservedRunningTime="2026-01-28 15:48:11.211524813 +0000 UTC m=+163.487496334" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.215053 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gw84l"] Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.215499 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="391b7add-cc22-451b-a87a-8130bb8924cb" containerName="collect-profiles" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.215622 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="391b7add-cc22-451b-a87a-8130bb8924cb" containerName="collect-profiles" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.215862 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="391b7add-cc22-451b-a87a-8130bb8924cb" containerName="collect-profiles" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.216973 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.219301 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.225268 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw84l"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.271704 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.271928 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.771897179 +0000 UTC m=+164.047868690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.273251 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.273662 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.773647008 +0000 UTC m=+164.049618519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.362812 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 15:43:10 +0000 UTC, rotation deadline is 2026-10-14 02:11:41.343160908 +0000 UTC Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.363054 4903 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6202h23m29.980110789s for next certificate rotation Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.374457 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.374619 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.874592007 +0000 UTC m=+164.150563518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.374683 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvm5\" (UniqueName: \"kubernetes.io/projected/97923485-cab3-4578-ae02-4489827d63ae-kube-api-access-rqvm5\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.374715 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-utilities\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.374758 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-catalog-content\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.374858 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.375154 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.875139072 +0000 UTC m=+164.151110583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.407774 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-954mb"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.408885 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.411708 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.425277 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-954mb"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.476431 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.476643 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-catalog-content\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.476721 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqvm5\" (UniqueName: \"kubernetes.io/projected/97923485-cab3-4578-ae02-4489827d63ae-kube-api-access-rqvm5\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.476742 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-utilities\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.477110 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-utilities\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.477185 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:11.977168382 +0000 UTC m=+164.253139893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.477381 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-catalog-content\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.514518 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqvm5\" (UniqueName: \"kubernetes.io/projected/97923485-cab3-4578-ae02-4489827d63ae-kube-api-access-rqvm5\") pod \"certified-operators-gw84l\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.532198 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.577656 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mjh\" (UniqueName: \"kubernetes.io/projected/8bd3dd6e-5429-4193-8531-6ba1b357358f-kube-api-access-b9mjh\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.578242 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.578375 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-catalog-content\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.578458 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-utilities\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.578570 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.580101 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.080079265 +0000 UTC m=+164.356050776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.608367 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/90b23d2e-fec0-494c-9a60-461cc16fe0ae-metrics-certs\") pod \"network-metrics-daemon-kq2bn\" (UID: \"90b23d2e-fec0-494c-9a60-461cc16fe0ae\") " pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.611767 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vjpx"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.618308 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.630472 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vjpx"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.639911 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kq2bn" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.679572 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.679852 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-catalog-content\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.679885 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-utilities\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.679963 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9mjh\" (UniqueName: \"kubernetes.io/projected/8bd3dd6e-5429-4193-8531-6ba1b357358f-kube-api-access-b9mjh\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.680090 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.180056977 +0000 UTC m=+164.456028498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.680341 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-catalog-content\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.680428 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-utilities\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.712521 4903 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pvtfk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.712607 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" podUID="1459b817-2f82-48c8-8267-bdef187b4df9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.714349 4903 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-pvtfk container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.714408 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" podUID="1459b817-2f82-48c8-8267-bdef187b4df9" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.715259 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9mjh\" (UniqueName: \"kubernetes.io/projected/8bd3dd6e-5429-4193-8531-6ba1b357358f-kube-api-access-b9mjh\") pod \"community-operators-954mb\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.724393 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.782299 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvtx\" (UniqueName: \"kubernetes.io/projected/202a5ad3-47a0-47cb-89fe-c01d2356e38f-kube-api-access-6lvtx\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.782352 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.782394 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-catalog-content\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.782437 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-utilities\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.782728 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.282716233 +0000 UTC m=+164.558687744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.817401 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gbsl"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.818833 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.832595 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gbsl"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.883792 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.884021 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.38396966 +0000 UTC m=+164.659941171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.884093 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-utilities\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.884361 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lvtx\" (UniqueName: \"kubernetes.io/projected/202a5ad3-47a0-47cb-89fe-c01d2356e38f-kube-api-access-6lvtx\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.884892 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-utilities\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.885517 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.885693 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-catalog-content\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: 
E0128 15:48:11.885785 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.385770492 +0000 UTC m=+164.661742073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.887042 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-catalog-content\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.913895 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw84l"] Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.913928 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lvtx\" (UniqueName: \"kubernetes.io/projected/202a5ad3-47a0-47cb-89fe-c01d2356e38f-kube-api-access-6lvtx\") pod \"certified-operators-6vjpx\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.963232 4903 generic.go:334] "Generic (PLEG): container finished" podID="387c62db-71dd-41d1-8f2d-06ed22d49a3a" containerID="fda5b14e3ea1764c485c4fc921962c888c9a7097ab95d8db430b97cc735b28dd" exitCode=0 Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.963326 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"387c62db-71dd-41d1-8f2d-06ed22d49a3a","Type":"ContainerDied","Data":"fda5b14e3ea1764c485c4fc921962c888c9a7097ab95d8db430b97cc735b28dd"} Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.965120 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:11 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:11 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:11 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.965168 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.965813 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerStarted","Data":"165de5b92a810e954ad0db435277356bbb9f0756a38b2c835778fd89b9b4fb80"} Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.990140 4903 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.990464 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6lj2\" (UniqueName: \"kubernetes.io/projected/1c684124-cb30-4db3-9ece-fd4baa23a639-kube-api-access-x6lj2\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.990497 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-catalog-content\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:11 crc kubenswrapper[4903]: I0128 15:48:11.990596 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-utilities\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:11 crc kubenswrapper[4903]: E0128 15:48:11.990707 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.490691253 +0000 UTC m=+164.766662764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.002295 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.024362 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kq2bn"] Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.092054 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.092117 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-utilities\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.092144 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6lj2\" (UniqueName: \"kubernetes.io/projected/1c684124-cb30-4db3-9ece-fd4baa23a639-kube-api-access-x6lj2\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.092167 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-catalog-content\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.092450 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.592437774 +0000 UTC m=+164.868409285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.092841 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-catalog-content\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.092891 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-utilities\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.114045 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-954mb"] Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.131903 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6lj2\" (UniqueName: \"kubernetes.io/projected/1c684124-cb30-4db3-9ece-fd4baa23a639-kube-api-access-x6lj2\") pod \"community-operators-4gbsl\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.173272 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.193835 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.194052 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.69399652 +0000 UTC m=+164.969968031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.194292 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.194953 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.694940136 +0000 UTC m=+164.970911647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.280191 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vjpx"] Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.295074 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.295462 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.795443282 +0000 UTC m=+165.071414793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.305013 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.396940 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91781a43-4d21-41b3-9562-4a90bcb20061-kubelet-dir\") pod \"91781a43-4d21-41b3-9562-4a90bcb20061\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.397045 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91781a43-4d21-41b3-9562-4a90bcb20061-kube-api-access\") pod \"91781a43-4d21-41b3-9562-4a90bcb20061\" (UID: \"91781a43-4d21-41b3-9562-4a90bcb20061\") " Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.397234 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.397586 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.897571984 +0000 UTC m=+165.173543495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.397754 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91781a43-4d21-41b3-9562-4a90bcb20061-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "91781a43-4d21-41b3-9562-4a90bcb20061" (UID: "91781a43-4d21-41b3-9562-4a90bcb20061"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.403315 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91781a43-4d21-41b3-9562-4a90bcb20061-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "91781a43-4d21-41b3-9562-4a90bcb20061" (UID: "91781a43-4d21-41b3-9562-4a90bcb20061"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.466874 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gbsl"] Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.498186 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.498442 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.99842619 +0000 UTC m=+165.274397701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.498675 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.498714 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91781a43-4d21-41b3-9562-4a90bcb20061-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.498732 4903 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91781a43-4d21-41b3-9562-4a90bcb20061-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.498930 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:12.998923895 +0000 UTC m=+165.274895406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.599928 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.600132 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.100102189 +0000 UTC m=+165.376073710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.600365 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.600654 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.100640544 +0000 UTC m=+165.376612055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.701855 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.702010 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.201991324 +0000 UTC m=+165.477962825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.702062 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.702362 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.202353904 +0000 UTC m=+165.478325415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.802881 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.803044 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.303012585 +0000 UTC m=+165.578984116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.803098 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.803427 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.303414946 +0000 UTC m=+165.579386447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.904155 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.904292 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.404262371 +0000 UTC m=+165.680233882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.904444 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:12 crc kubenswrapper[4903]: E0128 15:48:12.904748 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.404737274 +0000 UTC m=+165.680708785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.964839 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:12 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:12 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:12 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.964948 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.974409 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-954mb" event={"ID":"8bd3dd6e-5429-4193-8531-6ba1b357358f","Type":"ContainerStarted","Data":"56395b3a79547182ee46a74c0c4d8b41376c63666ca8d4836c0a63df7f7ce775"} Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.978202 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbsl" event={"ID":"1c684124-cb30-4db3-9ece-fd4baa23a639","Type":"ContainerStarted","Data":"4d531ac4a50bb1d02c6568f5e70a6b9e5485c889f66ab117076d21b334ecc9f5"} Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.981700 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" event={"ID":"90b23d2e-fec0-494c-9a60-461cc16fe0ae","Type":"ContainerStarted","Data":"1afae24e2134331e9db0add239f4b97592a78812a86fead7911de8cbb2acd5cd"} Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.986345 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerStarted","Data":"23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013"} Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.990023 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vjpx" event={"ID":"202a5ad3-47a0-47cb-89fe-c01d2356e38f","Type":"ContainerStarted","Data":"b63fb20832975d1a296a8f06f2619484357ab590d7fa2d35070b2164969e4b20"} Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.992323 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"91781a43-4d21-41b3-9562-4a90bcb20061","Type":"ContainerDied","Data":"048caedb1a36a91e04f5c3c515646f7f60bcdf11e69881f03b6a2fbe64826ddc"} Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.992418 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="048caedb1a36a91e04f5c3c515646f7f60bcdf11e69881f03b6a2fbe64826ddc" Jan 28 15:48:12 crc kubenswrapper[4903]: I0128 15:48:12.992638 4903 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.005036 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.005176 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.505157778 +0000 UTC m=+165.781129289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.005286 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.005637 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.505622161 +0000 UTC m=+165.781593672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.106764 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.107327 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.60729524 +0000 UTC m=+165.883266751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.107500 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.108107 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.608056012 +0000 UTC m=+165.884027523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.209043 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.209253 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.709224867 +0000 UTC m=+165.985196378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.210201 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.210794 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.710772411 +0000 UTC m=+165.986743922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.247610 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.311725 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.312050 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.812034398 +0000 UTC m=+166.088005909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.407502 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j87z2"] Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.407915 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91781a43-4d21-41b3-9562-4a90bcb20061" containerName="pruner" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.407929 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="91781a43-4d21-41b3-9562-4a90bcb20061" containerName="pruner" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.407940 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387c62db-71dd-41d1-8f2d-06ed22d49a3a" containerName="pruner" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.407968 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="387c62db-71dd-41d1-8f2d-06ed22d49a3a" containerName="pruner" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.408087 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="387c62db-71dd-41d1-8f2d-06ed22d49a3a" containerName="pruner" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.408111 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="91781a43-4d21-41b3-9562-4a90bcb20061" containerName="pruner" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.408790 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.410943 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.413989 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kubelet-dir\") pod \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.414154 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kube-api-access\") pod \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\" (UID: \"387c62db-71dd-41d1-8f2d-06ed22d49a3a\") " Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.414064 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "387c62db-71dd-41d1-8f2d-06ed22d49a3a" (UID: "387c62db-71dd-41d1-8f2d-06ed22d49a3a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.415741 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.415865 4903 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.416062 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:13.916046004 +0000 UTC m=+166.192017515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.421516 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "387c62db-71dd-41d1-8f2d-06ed22d49a3a" (UID: "387c62db-71dd-41d1-8f2d-06ed22d49a3a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.422410 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87z2"] Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.517811 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.518015 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.017984179 +0000 UTC m=+166.293955690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.518066 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-utilities\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.518247 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-catalog-content\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.518311 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.518370 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjdhs\" (UniqueName: \"kubernetes.io/projected/6f6c4494-66ec-40c7-960f-0ab4558af7d8-kube-api-access-mjdhs\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.518514 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/387c62db-71dd-41d1-8f2d-06ed22d49a3a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.518647 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.018633968 +0000 UTC m=+166.294605589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.619895 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.620066 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.120029579 +0000 UTC m=+166.396001110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.620138 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-utilities\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.620382 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-catalog-content\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.620447 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.620502 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjdhs\" (UniqueName: \"kubernetes.io/projected/6f6c4494-66ec-40c7-960f-0ab4558af7d8-kube-api-access-mjdhs\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.620602 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-utilities\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.621126 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.121079039 +0000 UTC m=+166.397050650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.622021 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-catalog-content\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.638445 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjdhs\" (UniqueName: \"kubernetes.io/projected/6f6c4494-66ec-40c7-960f-0ab4558af7d8-kube-api-access-mjdhs\") pod \"redhat-marketplace-j87z2\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.721172 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.721567 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.221549074 +0000 UTC m=+166.497520585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.729969 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.816505 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b7tqp"] Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.817451 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.822359 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.822753 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.322736199 +0000 UTC m=+166.598707710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.836302 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7tqp"] Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.923676 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.923919 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpq5\" (UniqueName: \"kubernetes.io/projected/20de9098-3be6-464b-b749-c2836ac0a896-kube-api-access-zzpq5\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.924048 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-utilities\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.924089 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-catalog-content\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:13 crc kubenswrapper[4903]: E0128 15:48:13.924204 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.424186392 +0000 UTC m=+166.700157903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.985413 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:13 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:13 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:13 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:13 crc kubenswrapper[4903]: I0128 15:48:13.985499 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.007211 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.007222 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"387c62db-71dd-41d1-8f2d-06ed22d49a3a","Type":"ContainerDied","Data":"500fdac09a7d1baf4ea11fab46ad2c3b9cceece088f0be4841215c73ee327c1d"} Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.007263 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="500fdac09a7d1baf4ea11fab46ad2c3b9cceece088f0be4841215c73ee327c1d" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.007379 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87z2"] Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.010220 4903 generic.go:334] "Generic (PLEG): container finished" podID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerID="bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a" exitCode=0 Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.010273 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbsl" event={"ID":"1c684124-cb30-4db3-9ece-fd4baa23a639","Type":"ContainerDied","Data":"bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a"} Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.012406 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" event={"ID":"90b23d2e-fec0-494c-9a60-461cc16fe0ae","Type":"ContainerStarted","Data":"cee7a45fe1e550a6154308b876ee6e61d8e7a5b7163b993d520ddc4fd5d0f657"} Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.014035 4903 generic.go:334] "Generic (PLEG): container finished" podID="97923485-cab3-4578-ae02-4489827d63ae" containerID="23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013" exitCode=0 Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.014106 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" 
event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerDied","Data":"23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013"} Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.017303 4903 generic.go:334] "Generic (PLEG): container finished" podID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerID="a21ec688c1403fe282d5e864903bb711ed9179ffd77dfdaf4305ec4f2171a4c4" exitCode=0 Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.017357 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vjpx" event={"ID":"202a5ad3-47a0-47cb-89fe-c01d2356e38f","Type":"ContainerDied","Data":"a21ec688c1403fe282d5e864903bb711ed9179ffd77dfdaf4305ec4f2171a4c4"} Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.018423 4903 generic.go:334] "Generic (PLEG): container finished" podID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerID="795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c" exitCode=0 Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.018450 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-954mb" event={"ID":"8bd3dd6e-5429-4193-8531-6ba1b357358f","Type":"ContainerDied","Data":"795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c"} Jan 28 15:48:14 crc kubenswrapper[4903]: W0128 15:48:14.023362 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f6c4494_66ec_40c7_960f_0ab4558af7d8.slice/crio-1c7fffc91456d9173ae16e69ac18b8c053f6a80217fda8fa0cc63778c9534d9f WatchSource:0}: Error finding container 1c7fffc91456d9173ae16e69ac18b8c053f6a80217fda8fa0cc63778c9534d9f: Status 404 returned error can't find the container with id 1c7fffc91456d9173ae16e69ac18b8c053f6a80217fda8fa0cc63778c9534d9f Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.024858 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.024902 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpq5\" (UniqueName: \"kubernetes.io/projected/20de9098-3be6-464b-b749-c2836ac0a896-kube-api-access-zzpq5\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.024946 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-utilities\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.024968 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-catalog-content\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.025112 4903 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.52510205 +0000 UTC m=+166.801073561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.025323 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-catalog-content\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.025521 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-utilities\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.043439 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpq5\" (UniqueName: \"kubernetes.io/projected/20de9098-3be6-464b-b749-c2836ac0a896-kube-api-access-zzpq5\") pod \"redhat-marketplace-b7tqp\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.125724 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.125917 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.625893044 +0000 UTC m=+166.901864555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.126027 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.126336 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.626329196 +0000 UTC m=+166.902300707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.188664 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.227917 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.72786298 +0000 UTC m=+167.003834491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.227370 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.228255 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.228597 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.728589201 +0000 UTC m=+167.004560712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.329903 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.330788 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.830768794 +0000 UTC m=+167.106740305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.409729 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r7vcv"] Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.410691 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.414178 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.430925 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r7vcv"] Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.431620 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.431928 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:14.931916618 +0000 UTC m=+167.207888129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.502070 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7tqp"] Jan 28 15:48:14 crc kubenswrapper[4903]: W0128 15:48:14.502620 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20de9098_3be6_464b_b749_c2836ac0a896.slice/crio-b8f2e7a1d409052a98dd7987eb8a886b4edbff3ed6c85c3b110d5885021bc6d1 WatchSource:0}: Error finding container b8f2e7a1d409052a98dd7987eb8a886b4edbff3ed6c85c3b110d5885021bc6d1: Status 404 returned error can't find the container with id b8f2e7a1d409052a98dd7987eb8a886b4edbff3ed6c85c3b110d5885021bc6d1 Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.532852 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.533095 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-utilities\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.533172 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-catalog-content\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.533203 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrb7\" (UniqueName: \"kubernetes.io/projected/f3e140f0-9bf3-4817-af15-b215b941ba85-kube-api-access-tkrb7\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.533283 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.033270719 +0000 UTC m=+167.309242220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.634653 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.634906 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-catalog-content\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.634948 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrb7\" (UniqueName: \"kubernetes.io/projected/f3e140f0-9bf3-4817-af15-b215b941ba85-kube-api-access-tkrb7\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.634983 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-utilities\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.635124 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.135112662 +0000 UTC m=+167.411084173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.635403 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-catalog-content\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.635499 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-utilities\") pod \"redhat-operators-r7vcv\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.647137 4903 patch_prober.go:28] interesting pod/apiserver-76f77b778f-48dgn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]log ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]etcd ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/max-in-flight-filter ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 15:48:14 crc kubenswrapper[4903]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 15:48:14 crc kubenswrapper[4903]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 15:48:14 crc kubenswrapper[4903]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 15:48:14 crc kubenswrapper[4903]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 15:48:14 crc kubenswrapper[4903]: livez check failed Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.647205 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" podUID="cfd6fe6c-cdb9-4b41-a9f4-e245780116be" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.664624 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrb7\" (UniqueName: \"kubernetes.io/projected/f3e140f0-9bf3-4817-af15-b215b941ba85-kube-api-access-tkrb7\") pod \"redhat-operators-r7vcv\" (UID: 
\"f3e140f0-9bf3-4817-af15-b215b941ba85\") " pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.712559 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pvtfk" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.736135 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.736277 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.236251086 +0000 UTC m=+167.512222597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.736468 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.736782 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.236769971 +0000 UTC m=+167.512741482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.753521 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.809017 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gjhpx"] Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.810960 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.818312 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjhpx"] Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.838191 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.838804 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.338747819 +0000 UTC m=+167.614719330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.891166 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.898819 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-279w4" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.940165 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.940203 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndlzt\" (UniqueName: \"kubernetes.io/projected/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-kube-api-access-ndlzt\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.940248 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-catalog-content\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.940267 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-utilities\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " 
pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:14 crc kubenswrapper[4903]: E0128 15:48:14.940514 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.440502529 +0000 UTC m=+167.716474040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.992654 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:14 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:14 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:14 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:14 crc kubenswrapper[4903]: I0128 15:48:14.992700 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.045996 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.046250 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndlzt\" (UniqueName: \"kubernetes.io/projected/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-kube-api-access-ndlzt\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.046340 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-catalog-content\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.046358 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-utilities\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.046962 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:48:15.546947884 +0000 UTC m=+167.822919395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.047938 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-catalog-content\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.048142 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-utilities\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.059811 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87z2" event={"ID":"6f6c4494-66ec-40c7-960f-0ab4558af7d8","Type":"ContainerStarted","Data":"1c7fffc91456d9173ae16e69ac18b8c053f6a80217fda8fa0cc63778c9534d9f"} Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.083637 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7tqp" event={"ID":"20de9098-3be6-464b-b749-c2836ac0a896","Type":"ContainerStarted","Data":"b8f2e7a1d409052a98dd7987eb8a886b4edbff3ed6c85c3b110d5885021bc6d1"} Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.084483 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndlzt\" (UniqueName: \"kubernetes.io/projected/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-kube-api-access-ndlzt\") pod \"redhat-operators-gjhpx\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.086906 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.124868 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r7vcv"] Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.141610 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.147993 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.148272 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.648260193 +0000 UTC m=+167.924231704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.249242 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.250547 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.750510568 +0000 UTC m=+168.026482079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.352013 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.352556 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.852514847 +0000 UTC m=+168.128486368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.372446 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjhpx"] Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.424352 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8v8wj" Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.453006 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.454273 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:15.954251647 +0000 UTC m=+168.230223168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.554593 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.554963 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.054945138 +0000 UTC m=+168.330916649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.656735 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.656958 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.156928976 +0000 UTC m=+168.432900487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.657121 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.657521 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.157505412 +0000 UTC m=+168.433476923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.757784 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.757964 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.257936986 +0000 UTC m=+168.533908497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.858865 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.859233 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.359216975 +0000 UTC m=+168.635188486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.960421 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.960675 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.460640986 +0000 UTC m=+168.736612497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.961008 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:15 crc kubenswrapper[4903]: E0128 15:48:15.961374 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.461366047 +0000 UTC m=+168.737337558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.964828 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:15 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:15 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:15 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:15 crc kubenswrapper[4903]: I0128 15:48:15.964897 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.061642 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.061869 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.561843611 +0000 UTC m=+168.837815122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.094674 4903 generic.go:334] "Generic (PLEG): container finished" podID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerID="b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f" exitCode=0 Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.094790 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87z2" event={"ID":"6f6c4494-66ec-40c7-960f-0ab4558af7d8","Type":"ContainerDied","Data":"b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f"} Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.096385 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjhpx" event={"ID":"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f","Type":"ContainerStarted","Data":"f471e04c52ac0a9dfc39d5af627b2d58b931174ab36cd8c5d0d2b35c6f095da6"} Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.097818 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r7vcv" event={"ID":"f3e140f0-9bf3-4817-af15-b215b941ba85","Type":"ContainerStarted","Data":"cdd5d25adbc67c076f801e199473f4a830e38020b61ff0814a644ffa985989ac"} Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.100183 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7tqp" event={"ID":"20de9098-3be6-464b-b749-c2836ac0a896","Type":"ContainerStarted","Data":"cfb2ad807009fc2fdf71202649f7c4c715d16b09e80e1393195bc35085a56eb0"} Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.163233 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.164253 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.6642095 +0000 UTC m=+168.940181011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.264485 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.264726 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.764685655 +0000 UTC m=+169.040657176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.264877 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.265371 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.765360515 +0000 UTC m=+169.041332026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.366018 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.366276 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.866228971 +0000 UTC m=+169.142200472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.366368 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.366727 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.866711254 +0000 UTC m=+169.142682765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.467810 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.468066 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.968027633 +0000 UTC m=+169.243999144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.468449 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.468925 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:16.968910068 +0000 UTC m=+169.244881579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.574762 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.574976 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.074946491 +0000 UTC m=+169.350918002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.576602 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.577058 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.07704334 +0000 UTC m=+169.353014851 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.678153 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.678548 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.178477613 +0000 UTC m=+169.454449124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.679229 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.679778 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.17976922 +0000 UTC m=+169.455740731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.780057 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.780261 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.280209963 +0000 UTC m=+169.556181474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.780624 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.780963 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.280949535 +0000 UTC m=+169.556921046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.881273 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.881365 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.381348908 +0000 UTC m=+169.657320409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.881524 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.881847 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.381832801 +0000 UTC m=+169.657804312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.964055 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:16 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:16 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:16 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.964462 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.983226 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.983578 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.483505451 +0000 UTC m=+169.759476962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:16 crc kubenswrapper[4903]: I0128 15:48:16.983628 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:16 crc kubenswrapper[4903]: E0128 15:48:16.984000 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.483982934 +0000 UTC m=+169.759954445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.084995 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.085176 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.585150778 +0000 UTC m=+169.861122289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.085301 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.085630 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.585616581 +0000 UTC m=+169.861588102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.108024 4903 generic.go:334] "Generic (PLEG): container finished" podID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerID="7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc" exitCode=0 Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.108099 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjhpx" event={"ID":"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f","Type":"ContainerDied","Data":"7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc"} Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.111777 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" event={"ID":"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba","Type":"ContainerStarted","Data":"d394ba495b05a1d9867cb9e18248b784db2c6f218fca95c26457a52e531fede3"} Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.111818 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" event={"ID":"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba","Type":"ContainerStarted","Data":"955a90e481ef9c72626d4e9be2e5124010eaace25b0df6c3fa51b16afab4258e"} Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.118818 4903 generic.go:334] "Generic (PLEG): container finished" podID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerID="00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693" exitCode=0 Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.118921 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r7vcv" event={"ID":"f3e140f0-9bf3-4817-af15-b215b941ba85","Type":"ContainerDied","Data":"00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693"} Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.121843 4903 generic.go:334] "Generic (PLEG): container finished" podID="20de9098-3be6-464b-b749-c2836ac0a896" containerID="cfb2ad807009fc2fdf71202649f7c4c715d16b09e80e1393195bc35085a56eb0" exitCode=0 Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.121902 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7tqp" event={"ID":"20de9098-3be6-464b-b749-c2836ac0a896","Type":"ContainerDied","Data":"cfb2ad807009fc2fdf71202649f7c4c715d16b09e80e1393195bc35085a56eb0"} Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.124349 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kq2bn" event={"ID":"90b23d2e-fec0-494c-9a60-461cc16fe0ae","Type":"ContainerStarted","Data":"94f441e8531fa6c4c26af774529bde926e1e571e7f41066e086d95136062c0ee"} Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.186643 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 
15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.187517 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.687489446 +0000 UTC m=+169.963460977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.191507 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kq2bn" podStartSLOduration=148.19149102 podStartE2EDuration="2m28.19149102s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:17.191188921 +0000 UTC m=+169.467160432" watchObservedRunningTime="2026-01-28 15:48:17.19149102 +0000 UTC m=+169.467462521" Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.289005 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.289854 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.789838254 +0000 UTC m=+170.065809775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.391706 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.391923 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.891894564 +0000 UTC m=+170.167866075 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.392298 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.392902 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.892881442 +0000 UTC m=+170.168852953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.469409 4903 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.497234 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.497421 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.997392802 +0000 UTC m=+170.273364303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.499360 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.499808 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:48:17.99978667 +0000 UTC m=+170.275758371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8t5gp" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.600559 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:17 crc kubenswrapper[4903]: E0128 15:48:17.600940 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:48:18.100923813 +0000 UTC m=+170.376895314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.612343 4903 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T15:48:17.469437748Z","Handler":null,"Name":""} Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.635200 4903 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.635274 4903 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.702808 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.821349 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
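The entries above show the kubelet repeatedly failing MountVolume/UnmountVolume for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", backing off 500ms each time (durationBeforeRetry in nestedpendingoperations.go), until the plugin watcher picks up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock and csi_plugin.go registers the driver, after which the operations succeed in the entries that follow. The sketch below is an illustrative Go model of that retry-until-registered pattern only, under the assumption of a simple in-memory registry; driverRegistry and waitForDriver are hypothetical names, not kubelet APIs.

```go
// Illustrative sketch of the retry-until-registered behavior seen in the log:
// volume operations fail while the CSI driver is absent from the registry and
// are retried on a fixed 500ms backoff until registration lands.
// driverRegistry and waitForDriver are hypothetical, not kubelet code.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// driverRegistry stands in for the kubelet's list of registered CSI drivers.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]bool
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = true
}

func (r *driverRegistry) lookup(name string) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.drivers[name]
}

// waitForDriver polls the registry on a fixed interval, mirroring the
// 500ms durationBeforeRetry visible in the log above.
func waitForDriver(r *driverRegistry, name string, retryEvery time.Duration, maxAttempts int) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if r.lookup(name) {
			return nil
		}
		fmt.Printf("attempt %d: driver %q not found in registered CSI drivers, retrying in %s\n",
			attempt, name, retryEvery)
		time.Sleep(retryEvery)
	}
	return errors.New("driver never registered")
}

func main() {
	reg := &driverRegistry{drivers: map[string]bool{}}

	// Simulate the plugin watcher registering the driver slightly later,
	// as happens at 15:48:17.635 in the log.
	go func() {
		time.Sleep(1200 * time.Millisecond)
		reg.register("kubevirt.io.hostpath-provisioner")
	}()

	if err := waitForDriver(reg, "kubevirt.io.hostpath-provisioner", 500*time.Millisecond, 10); err != nil {
		fmt.Println("mount failed:", err)
		return
	}
	fmt.Println("driver registered; MountDevice/SetUp can proceed")
}
```

In the real log the same transition is visible without any code: once "Register new plugin with name: kubevirt.io.hostpath-provisioner" appears, the next reconciler pass reports MountVolume.MountDevice and MountVolume.SetUp succeeded for the image-registry pod.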
Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.821412 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.889748 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8t5gp\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.904960 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.916621 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.964205 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:17 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:17 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:17 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:17 crc kubenswrapper[4903]: I0128 15:48:17.964315 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:18 crc kubenswrapper[4903]: I0128 15:48:18.109708 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:18 crc kubenswrapper[4903]: I0128 15:48:18.136599 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" event={"ID":"85bc5bb3-c08a-4c3a-b3d2-d33397a073ba","Type":"ContainerStarted","Data":"cfbec3f1f455cdd565e3f5ed250e99e2a543143bcc3df75e587b6cfc47a07d3c"} Jan 28 15:48:18 crc kubenswrapper[4903]: I0128 15:48:18.160155 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fz85j" podStartSLOduration=21.160123623 podStartE2EDuration="21.160123623s" podCreationTimestamp="2026-01-28 15:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:48:18.157136458 +0000 UTC m=+170.433107999" watchObservedRunningTime="2026-01-28 15:48:18.160123623 +0000 UTC m=+170.436095134" Jan 28 15:48:18 crc kubenswrapper[4903]: I0128 15:48:18.401947 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8t5gp"] Jan 28 15:48:18 crc kubenswrapper[4903]: I0128 15:48:18.422428 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 15:48:18 crc kubenswrapper[4903]: I0128 15:48:18.968497 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:18 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:18 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:18 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:18 crc kubenswrapper[4903]: I0128 15:48:18.968934 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.141909 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" event={"ID":"c1dff77d-5e58-42e0-bfac-040973ea3094","Type":"ContainerStarted","Data":"dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0"} Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.141948 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" event={"ID":"c1dff77d-5e58-42e0-bfac-040973ea3094","Type":"ContainerStarted","Data":"b02209a16439f41ba249bb856ea29c45ef95ea9016386b295c6c52a64b9c52e4"} Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.142019 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.163129 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" podStartSLOduration=150.163108832 podStartE2EDuration="2m30.163108832s" podCreationTimestamp="2026-01-28 15:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 15:48:19.161365302 +0000 UTC m=+171.437336823" watchObservedRunningTime="2026-01-28 15:48:19.163108832 +0000 UTC m=+171.439080333" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.602198 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.607834 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-48dgn" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.636447 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.636502 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.636514 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.636599 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.733667 4903 patch_prober.go:28] interesting pod/console-f9d7485db-522t5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.733755 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-522t5" podUID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.963677 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:19 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:19 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:19 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:19 crc kubenswrapper[4903]: I0128 15:48:19.963757 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:20 crc kubenswrapper[4903]: I0128 15:48:20.216081 4903 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zxr6z" Jan 28 15:48:20 crc kubenswrapper[4903]: I0128 15:48:20.964916 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:20 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:20 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:20 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:20 crc kubenswrapper[4903]: I0128 15:48:20.965501 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:21 crc kubenswrapper[4903]: I0128 15:48:21.963500 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:21 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:21 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:21 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:21 crc kubenswrapper[4903]: I0128 15:48:21.963562 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:22 crc kubenswrapper[4903]: I0128 15:48:22.963229 4903 patch_prober.go:28] interesting pod/router-default-5444994796-kr4qg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:48:22 crc kubenswrapper[4903]: [-]has-synced failed: reason withheld Jan 28 15:48:22 crc kubenswrapper[4903]: [+]process-running ok Jan 28 15:48:22 crc kubenswrapper[4903]: healthz check failed Jan 28 15:48:22 crc kubenswrapper[4903]: I0128 15:48:22.963301 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kr4qg" podUID="25dd11d8-a217-40ac-8d11-03b28106776c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:48:23 crc kubenswrapper[4903]: I0128 15:48:23.963584 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:48:23 crc kubenswrapper[4903]: I0128 15:48:23.967211 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kr4qg" Jan 28 15:48:26 crc kubenswrapper[4903]: I0128 15:48:26.613678 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:48:26 crc kubenswrapper[4903]: I0128 15:48:26.614028 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.636802 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.636827 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.637376 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.637424 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.637421 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.637781 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.637811 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.638081 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"834ad15bcc5f77d3b2af9b49589a84c43c28e1216c1e6f738f89a07f58bf44db"} pod="openshift-console/downloads-7954f5f757-tcmkg" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.638164 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" containerID="cri-o://834ad15bcc5f77d3b2af9b49589a84c43c28e1216c1e6f738f89a07f58bf44db" gracePeriod=2 Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.737476 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:48:29 crc kubenswrapper[4903]: I0128 15:48:29.741637 4903 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:48:32 crc kubenswrapper[4903]: I0128 15:48:32.234990 4903 generic.go:334] "Generic (PLEG): container finished" podID="a1c4af21-1253-4476-8f98-98377ab79e81" containerID="834ad15bcc5f77d3b2af9b49589a84c43c28e1216c1e6f738f89a07f58bf44db" exitCode=0 Jan 28 15:48:32 crc kubenswrapper[4903]: I0128 15:48:32.235084 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tcmkg" event={"ID":"a1c4af21-1253-4476-8f98-98377ab79e81","Type":"ContainerDied","Data":"834ad15bcc5f77d3b2af9b49589a84c43c28e1216c1e6f738f89a07f58bf44db"} Jan 28 15:48:36 crc kubenswrapper[4903]: I0128 15:48:36.806548 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:48:38 crc kubenswrapper[4903]: I0128 15:48:38.118694 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:48:39 crc kubenswrapper[4903]: I0128 15:48:39.640063 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:39 crc kubenswrapper[4903]: I0128 15:48:39.640427 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:40 crc kubenswrapper[4903]: I0128 15:48:40.266069 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vxzhf" Jan 28 15:48:49 crc kubenswrapper[4903]: I0128 15:48:49.636716 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:49 crc kubenswrapper[4903]: I0128 15:48:49.637246 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.382813 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.387319 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.389624 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.390201 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.394193 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.456506 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.456633 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.557642 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.557698 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.557805 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.576150 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:52 crc kubenswrapper[4903]: I0128 15:48:52.725758 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:48:56 crc kubenswrapper[4903]: I0128 15:48:56.613593 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:48:56 crc kubenswrapper[4903]: I0128 15:48:56.613857 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:48:56 crc kubenswrapper[4903]: I0128 15:48:56.613899 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:48:56 crc kubenswrapper[4903]: I0128 15:48:56.614405 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:48:56 crc kubenswrapper[4903]: I0128 15:48:56.614477 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621" gracePeriod=600 Jan 28 15:48:57 crc kubenswrapper[4903]: I0128 15:48:57.970307 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:48:57 crc kubenswrapper[4903]: I0128 15:48:57.971186 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:57 crc kubenswrapper[4903]: I0128 15:48:57.986083 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.130158 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be3200d9-3341-4a0b-a717-44311b50b23f-kube-api-access\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.130873 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.131091 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-var-lock\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.232281 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be3200d9-3341-4a0b-a717-44311b50b23f-kube-api-access\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.232773 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.232926 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-var-lock\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.233030 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-var-lock\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.232824 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.251063 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be3200d9-3341-4a0b-a717-44311b50b23f-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"be3200d9-3341-4a0b-a717-44311b50b23f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: I0128 15:48:58.297963 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:48:58 crc kubenswrapper[4903]: E0128 15:48:58.407392 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:48:58 crc kubenswrapper[4903]: E0128 15:48:58.407600 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzpq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b7tqp_openshift-marketplace(20de9098-3be6-464b-b749-c2836ac0a896): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:48:58 crc kubenswrapper[4903]: E0128 15:48:58.409104 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b7tqp" podUID="20de9098-3be6-464b-b749-c2836ac0a896" Jan 28 15:48:59 crc kubenswrapper[4903]: I0128 15:48:59.636413 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:48:59 crc kubenswrapper[4903]: I0128 15:48:59.636511 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 
10.217.0.8:8080: connect: connection refused" Jan 28 15:49:03 crc kubenswrapper[4903]: E0128 15:49:03.840968 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:49:03 crc kubenswrapper[4903]: E0128 15:49:03.841697 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjdhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-j87z2_openshift-marketplace(6f6c4494-66ec-40c7-960f-0ab4558af7d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\": context canceled" logger="UnhandledError" Jan 28 15:49:03 crc kubenswrapper[4903]: E0128 15:49:03.842966 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-j87z2" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" Jan 28 15:49:08 crc kubenswrapper[4903]: E0128 15:49:08.969505 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:49:08 crc kubenswrapper[4903]: E0128 15:49:08.970006 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9mjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-954mb_openshift-marketplace(8bd3dd6e-5429-4193-8531-6ba1b357358f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:49:08 crc kubenswrapper[4903]: E0128 15:49:08.971190 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-954mb" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" Jan 28 15:49:09 crc kubenswrapper[4903]: I0128 15:49:09.456147 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621" exitCode=0 Jan 28 15:49:09 crc kubenswrapper[4903]: I0128 15:49:09.456190 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621"} Jan 28 15:49:09 crc kubenswrapper[4903]: I0128 15:49:09.636724 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:49:09 crc kubenswrapper[4903]: I0128 15:49:09.636789 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:49:17 crc kubenswrapper[4903]: E0128 15:49:17.976769 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:49:17 crc kubenswrapper[4903]: E0128 15:49:17.977337 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkrb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-r7vcv_openshift-marketplace(f3e140f0-9bf3-4817-af15-b215b941ba85): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:49:17 crc kubenswrapper[4903]: E0128 15:49:17.978601 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r7vcv" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.184759 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-r7vcv" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.219442 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.219646 4903 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6lj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4gbsl_openshift-marketplace(1c684124-cb30-4db3-9ece-fd4baa23a639): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.220843 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4gbsl" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.227996 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.228225 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lvtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6vjpx_openshift-marketplace(202a5ad3-47a0-47cb-89fe-c01d2356e38f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.230352 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6vjpx" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.278765 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.278910 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqvm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gw84l_openshift-marketplace(97923485-cab3-4578-ae02-4489827d63ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.280383 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gw84l" podUID="97923485-cab3-4578-ae02-4489827d63ae" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.358522 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.359023 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndlzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gjhpx_openshift-marketplace(ce40a6c4-bba4-43dc-8aa7-3a63fd44447f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.360272 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gjhpx" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.461634 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.519964 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tcmkg" event={"ID":"a1c4af21-1253-4476-8f98-98377ab79e81","Type":"ContainerStarted","Data":"abcc02c3970cf0af14f5f9f095226026ac210e97b4c822adf9f8d084a0433cdb"} Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.520230 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.522428 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.522484 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.536752 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"bff95f2d-4408-4d1e-afcc-d3302a406ff4","Type":"ContainerStarted","Data":"a2e36a11a2442f7a84148e59078f92fe26a2ac546147ac84315b7ea865a9e4c5"} Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.540655 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"4997084f57a6cd366ada9b77ed2b50e6809e074fd29397f82383459cfec25834"} Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.542778 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gjhpx" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" Jan 28 15:49:19 crc kubenswrapper[4903]: E0128 15:49:19.543013 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6vjpx" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" Jan 28 15:49:19 crc kubenswrapper[4903]: W0128 15:49:19.545382 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbe3200d9_3341_4a0b_a717_44311b50b23f.slice/crio-3957a846d3fa6a80e0512766bcc29a3043a5947a9a326068cb4a8847446c996c WatchSource:0}: Error finding container 3957a846d3fa6a80e0512766bcc29a3043a5947a9a326068cb4a8847446c996c: Status 404 returned error can't find the container with id 3957a846d3fa6a80e0512766bcc29a3043a5947a9a326068cb4a8847446c996c Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.546317 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.635793 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.635822 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.635859 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:49:19 crc kubenswrapper[4903]: I0128 15:49:19.635874 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.546805 4903 generic.go:334] "Generic (PLEG): container finished" podID="20de9098-3be6-464b-b749-c2836ac0a896" 
containerID="3ecbad30fb66567d3785c928043f9fdc435c884bacb1225fb7445a651d87d07f" exitCode=0 Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.546920 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7tqp" event={"ID":"20de9098-3be6-464b-b749-c2836ac0a896","Type":"ContainerDied","Data":"3ecbad30fb66567d3785c928043f9fdc435c884bacb1225fb7445a651d87d07f"} Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.552311 4903 generic.go:334] "Generic (PLEG): container finished" podID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerID="17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d" exitCode=0 Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.552412 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87z2" event={"ID":"6f6c4494-66ec-40c7-960f-0ab4558af7d8","Type":"ContainerDied","Data":"17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d"} Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.554135 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bff95f2d-4408-4d1e-afcc-d3302a406ff4","Type":"ContainerStarted","Data":"a4b55c500e9e298ec5a1094b6d56e676f1501ba69e9c477c0f1f4df323a3ce8f"} Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.557253 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"be3200d9-3341-4a0b-a717-44311b50b23f","Type":"ContainerStarted","Data":"79144ee4f3f8f7e259f46305abbdf4899b71a94b5ab60442183e983c409e5578"} Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.557304 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"be3200d9-3341-4a0b-a717-44311b50b23f","Type":"ContainerStarted","Data":"3957a846d3fa6a80e0512766bcc29a3043a5947a9a326068cb4a8847446c996c"} Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.558089 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.558188 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.587597 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=28.587569854 podStartE2EDuration="28.587569854s" podCreationTimestamp="2026-01-28 15:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:49:20.583159937 +0000 UTC m=+232.859131448" watchObservedRunningTime="2026-01-28 15:49:20.587569854 +0000 UTC m=+232.863541375" Jan 28 15:49:20 crc kubenswrapper[4903]: I0128 15:49:20.622949 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=23.622927639 podStartE2EDuration="23.622927639s" podCreationTimestamp="2026-01-28 15:48:57 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:49:20.616590686 +0000 UTC m=+232.892562197" watchObservedRunningTime="2026-01-28 15:49:20.622927639 +0000 UTC m=+232.898899150" Jan 28 15:49:21 crc kubenswrapper[4903]: I0128 15:49:21.563554 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-954mb" event={"ID":"8bd3dd6e-5429-4193-8531-6ba1b357358f","Type":"ContainerStarted","Data":"0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f"} Jan 28 15:49:21 crc kubenswrapper[4903]: I0128 15:49:21.567329 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7tqp" event={"ID":"20de9098-3be6-464b-b749-c2836ac0a896","Type":"ContainerStarted","Data":"c845f8a514133babdab891fc478f2855207fdeb7d88a6a92322563c27bfb582b"} Jan 28 15:49:21 crc kubenswrapper[4903]: I0128 15:49:21.570049 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87z2" event={"ID":"6f6c4494-66ec-40c7-960f-0ab4558af7d8","Type":"ContainerStarted","Data":"88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d"} Jan 28 15:49:21 crc kubenswrapper[4903]: I0128 15:49:21.573517 4903 generic.go:334] "Generic (PLEG): container finished" podID="bff95f2d-4408-4d1e-afcc-d3302a406ff4" containerID="a4b55c500e9e298ec5a1094b6d56e676f1501ba69e9c477c0f1f4df323a3ce8f" exitCode=0 Jan 28 15:49:21 crc kubenswrapper[4903]: I0128 15:49:21.573985 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bff95f2d-4408-4d1e-afcc-d3302a406ff4","Type":"ContainerDied","Data":"a4b55c500e9e298ec5a1094b6d56e676f1501ba69e9c477c0f1f4df323a3ce8f"} Jan 28 15:49:21 crc kubenswrapper[4903]: I0128 15:49:21.625244 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b7tqp" podStartSLOduration=4.770372323 podStartE2EDuration="1m8.625222882s" podCreationTimestamp="2026-01-28 15:48:13 +0000 UTC" firstStartedPulling="2026-01-28 15:48:17.123973951 +0000 UTC m=+169.399945482" lastFinishedPulling="2026-01-28 15:49:20.9788245 +0000 UTC m=+233.254796041" observedRunningTime="2026-01-28 15:49:21.621076722 +0000 UTC m=+233.897048243" watchObservedRunningTime="2026-01-28 15:49:21.625222882 +0000 UTC m=+233.901194393" Jan 28 15:49:21 crc kubenswrapper[4903]: I0128 15:49:21.625603 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j87z2" podStartSLOduration=4.785379187 podStartE2EDuration="1m8.625595243s" podCreationTimestamp="2026-01-28 15:48:13 +0000 UTC" firstStartedPulling="2026-01-28 15:48:17.127481471 +0000 UTC m=+169.403452982" lastFinishedPulling="2026-01-28 15:49:20.967697527 +0000 UTC m=+233.243669038" observedRunningTime="2026-01-28 15:49:21.605947164 +0000 UTC m=+233.881918675" watchObservedRunningTime="2026-01-28 15:49:21.625595243 +0000 UTC m=+233.901566754" Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.580002 4903 generic.go:334] "Generic (PLEG): container finished" podID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerID="0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f" exitCode=0 Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.580081 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-954mb" 
event={"ID":"8bd3dd6e-5429-4193-8531-6ba1b357358f","Type":"ContainerDied","Data":"0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f"} Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.807185 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.870665 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kubelet-dir\") pod \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.870793 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bff95f2d-4408-4d1e-afcc-d3302a406ff4" (UID: "bff95f2d-4408-4d1e-afcc-d3302a406ff4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.870850 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kube-api-access\") pod \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\" (UID: \"bff95f2d-4408-4d1e-afcc-d3302a406ff4\") " Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.871112 4903 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.884758 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bff95f2d-4408-4d1e-afcc-d3302a406ff4" (UID: "bff95f2d-4408-4d1e-afcc-d3302a406ff4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:49:22 crc kubenswrapper[4903]: I0128 15:49:22.972899 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bff95f2d-4408-4d1e-afcc-d3302a406ff4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:23 crc kubenswrapper[4903]: I0128 15:49:23.604748 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bff95f2d-4408-4d1e-afcc-d3302a406ff4","Type":"ContainerDied","Data":"a2e36a11a2442f7a84148e59078f92fe26a2ac546147ac84315b7ea865a9e4c5"} Jan 28 15:49:23 crc kubenswrapper[4903]: I0128 15:49:23.605099 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2e36a11a2442f7a84148e59078f92fe26a2ac546147ac84315b7ea865a9e4c5" Jan 28 15:49:23 crc kubenswrapper[4903]: I0128 15:49:23.604823 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:49:23 crc kubenswrapper[4903]: I0128 15:49:23.617877 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-954mb" event={"ID":"8bd3dd6e-5429-4193-8531-6ba1b357358f","Type":"ContainerStarted","Data":"d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f"} Jan 28 15:49:23 crc kubenswrapper[4903]: I0128 15:49:23.731343 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:49:23 crc kubenswrapper[4903]: I0128 15:49:23.731610 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:49:24 crc kubenswrapper[4903]: I0128 15:49:24.189007 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:49:24 crc kubenswrapper[4903]: I0128 15:49:24.189454 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:49:24 crc kubenswrapper[4903]: I0128 15:49:24.292900 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:49:24 crc kubenswrapper[4903]: I0128 15:49:24.293974 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:49:24 crc kubenswrapper[4903]: I0128 15:49:24.328775 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-954mb" podStartSLOduration=5.336124211 podStartE2EDuration="1m13.328758283s" podCreationTimestamp="2026-01-28 15:48:11 +0000 UTC" firstStartedPulling="2026-01-28 15:48:15.086425076 +0000 UTC m=+167.362396587" lastFinishedPulling="2026-01-28 15:49:23.079059148 +0000 UTC m=+235.355030659" observedRunningTime="2026-01-28 15:49:23.642416792 +0000 UTC m=+235.918388303" watchObservedRunningTime="2026-01-28 15:49:24.328758283 +0000 UTC m=+236.604729784" Jan 28 15:49:29 crc kubenswrapper[4903]: I0128 15:49:29.636145 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:49:29 crc kubenswrapper[4903]: I0128 15:49:29.637063 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 15:49:29 crc kubenswrapper[4903]: I0128 15:49:29.636145 4903 patch_prober.go:28] interesting pod/downloads-7954f5f757-tcmkg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 15:49:29 crc kubenswrapper[4903]: I0128 15:49:29.637137 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tcmkg" podUID="a1c4af21-1253-4476-8f98-98377ab79e81" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection 
refused" Jan 28 15:49:31 crc kubenswrapper[4903]: I0128 15:49:31.725636 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:49:31 crc kubenswrapper[4903]: I0128 15:49:31.725978 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:49:31 crc kubenswrapper[4903]: I0128 15:49:31.771494 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:49:32 crc kubenswrapper[4903]: I0128 15:49:32.722889 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:49:33 crc kubenswrapper[4903]: I0128 15:49:33.778683 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:49:34 crc kubenswrapper[4903]: I0128 15:49:34.235073 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:49:36 crc kubenswrapper[4903]: I0128 15:49:36.039423 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7tqp"] Jan 28 15:49:36 crc kubenswrapper[4903]: I0128 15:49:36.039656 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b7tqp" podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="registry-server" containerID="cri-o://c845f8a514133babdab891fc478f2855207fdeb7d88a6a92322563c27bfb582b" gracePeriod=2 Jan 28 15:49:37 crc kubenswrapper[4903]: I0128 15:49:37.697158 4903 generic.go:334] "Generic (PLEG): container finished" podID="20de9098-3be6-464b-b749-c2836ac0a896" containerID="c845f8a514133babdab891fc478f2855207fdeb7d88a6a92322563c27bfb582b" exitCode=0 Jan 28 15:49:37 crc kubenswrapper[4903]: I0128 15:49:37.697193 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7tqp" event={"ID":"20de9098-3be6-464b-b749-c2836ac0a896","Type":"ContainerDied","Data":"c845f8a514133babdab891fc478f2855207fdeb7d88a6a92322563c27bfb582b"} Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.266005 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.404930 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-utilities\") pod \"20de9098-3be6-464b-b749-c2836ac0a896\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.405040 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-catalog-content\") pod \"20de9098-3be6-464b-b749-c2836ac0a896\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.405096 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzpq5\" (UniqueName: \"kubernetes.io/projected/20de9098-3be6-464b-b749-c2836ac0a896-kube-api-access-zzpq5\") pod \"20de9098-3be6-464b-b749-c2836ac0a896\" (UID: \"20de9098-3be6-464b-b749-c2836ac0a896\") " Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.410792 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-utilities" (OuterVolumeSpecName: "utilities") pod "20de9098-3be6-464b-b749-c2836ac0a896" (UID: "20de9098-3be6-464b-b749-c2836ac0a896"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.415990 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20de9098-3be6-464b-b749-c2836ac0a896-kube-api-access-zzpq5" (OuterVolumeSpecName: "kube-api-access-zzpq5") pod "20de9098-3be6-464b-b749-c2836ac0a896" (UID: "20de9098-3be6-464b-b749-c2836ac0a896"). InnerVolumeSpecName "kube-api-access-zzpq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.437204 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20de9098-3be6-464b-b749-c2836ac0a896" (UID: "20de9098-3be6-464b-b749-c2836ac0a896"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.506982 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.507059 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de9098-3be6-464b-b749-c2836ac0a896-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.507177 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzpq5\" (UniqueName: \"kubernetes.io/projected/20de9098-3be6-464b-b749-c2836ac0a896-kube-api-access-zzpq5\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.654239 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tcmkg" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.715082 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerStarted","Data":"2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1"} Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.718179 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjhpx" event={"ID":"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f","Type":"ContainerStarted","Data":"4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe"} Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.723024 4903 generic.go:334] "Generic (PLEG): container finished" podID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerID="724e115353d927eabd5c3513bd3e7179c00f6dc46ef50c76708b96c9791b23b1" exitCode=0 Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.723077 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vjpx" event={"ID":"202a5ad3-47a0-47cb-89fe-c01d2356e38f","Type":"ContainerDied","Data":"724e115353d927eabd5c3513bd3e7179c00f6dc46ef50c76708b96c9791b23b1"} Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.726662 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r7vcv" event={"ID":"f3e140f0-9bf3-4817-af15-b215b941ba85","Type":"ContainerStarted","Data":"a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8"} Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.728976 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7tqp" event={"ID":"20de9098-3be6-464b-b749-c2836ac0a896","Type":"ContainerDied","Data":"b8f2e7a1d409052a98dd7987eb8a886b4edbff3ed6c85c3b110d5885021bc6d1"} Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.729009 4903 scope.go:117] "RemoveContainer" containerID="c845f8a514133babdab891fc478f2855207fdeb7d88a6a92322563c27bfb582b" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.729101 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7tqp" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.737928 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbsl" event={"ID":"1c684124-cb30-4db3-9ece-fd4baa23a639","Type":"ContainerStarted","Data":"a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd"} Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.799899 4903 scope.go:117] "RemoveContainer" containerID="3ecbad30fb66567d3785c928043f9fdc435c884bacb1225fb7445a651d87d07f" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.873258 4903 scope.go:117] "RemoveContainer" containerID="cfb2ad807009fc2fdf71202649f7c4c715d16b09e80e1393195bc35085a56eb0" Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.883583 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7tqp"] Jan 28 15:49:39 crc kubenswrapper[4903]: I0128 15:49:39.888576 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7tqp"] Jan 28 15:49:40 crc kubenswrapper[4903]: I0128 15:49:40.432010 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20de9098-3be6-464b-b749-c2836ac0a896" path="/var/lib/kubelet/pods/20de9098-3be6-464b-b749-c2836ac0a896/volumes" Jan 28 15:49:40 crc kubenswrapper[4903]: I0128 15:49:40.746524 4903 generic.go:334] "Generic (PLEG): container finished" podID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerID="a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd" exitCode=0 Jan 28 15:49:40 crc kubenswrapper[4903]: I0128 15:49:40.746600 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbsl" event={"ID":"1c684124-cb30-4db3-9ece-fd4baa23a639","Type":"ContainerDied","Data":"a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd"} Jan 28 15:49:40 crc kubenswrapper[4903]: I0128 15:49:40.748972 4903 generic.go:334] "Generic (PLEG): container finished" podID="97923485-cab3-4578-ae02-4489827d63ae" containerID="2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1" exitCode=0 Jan 28 15:49:40 crc kubenswrapper[4903]: I0128 15:49:40.749161 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerDied","Data":"2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1"} Jan 28 15:49:40 crc kubenswrapper[4903]: I0128 15:49:40.751913 4903 generic.go:334] "Generic (PLEG): container finished" podID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerID="a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8" exitCode=0 Jan 28 15:49:40 crc kubenswrapper[4903]: I0128 15:49:40.751935 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r7vcv" event={"ID":"f3e140f0-9bf3-4817-af15-b215b941ba85","Type":"ContainerDied","Data":"a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8"} Jan 28 15:49:41 crc kubenswrapper[4903]: I0128 15:49:41.761986 4903 generic.go:334] "Generic (PLEG): container finished" podID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerID="4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe" exitCode=0 Jan 28 15:49:41 crc kubenswrapper[4903]: I0128 15:49:41.762029 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjhpx" 
event={"ID":"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f","Type":"ContainerDied","Data":"4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe"} Jan 28 15:49:48 crc kubenswrapper[4903]: I0128 15:49:48.801597 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vjpx" event={"ID":"202a5ad3-47a0-47cb-89fe-c01d2356e38f","Type":"ContainerStarted","Data":"4bc1010ca577b827412e74a4177b4c92b4b48041fc4851302dbc311545f30ed9"} Jan 28 15:49:49 crc kubenswrapper[4903]: I0128 15:49:49.832974 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vjpx" podStartSLOduration=9.378681126 podStartE2EDuration="1m38.832952105s" podCreationTimestamp="2026-01-28 15:48:11 +0000 UTC" firstStartedPulling="2026-01-28 15:48:16.103878146 +0000 UTC m=+168.379849657" lastFinishedPulling="2026-01-28 15:49:45.558149095 +0000 UTC m=+257.834120636" observedRunningTime="2026-01-28 15:49:49.827387556 +0000 UTC m=+262.103359077" watchObservedRunningTime="2026-01-28 15:49:49.832952105 +0000 UTC m=+262.108923626" Jan 28 15:49:52 crc kubenswrapper[4903]: I0128 15:49:52.003096 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:49:52 crc kubenswrapper[4903]: I0128 15:49:52.003643 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:49:52 crc kubenswrapper[4903]: I0128 15:49:52.062759 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:49:52 crc kubenswrapper[4903]: I0128 15:49:52.883479 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:49:52 crc kubenswrapper[4903]: I0128 15:49:52.940786 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vjpx"] Jan 28 15:49:54 crc kubenswrapper[4903]: I0128 15:49:54.839731 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6vjpx" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="registry-server" containerID="cri-o://4bc1010ca577b827412e74a4177b4c92b4b48041fc4851302dbc311545f30ed9" gracePeriod=2 Jan 28 15:49:55 crc kubenswrapper[4903]: I0128 15:49:55.848884 4903 generic.go:334] "Generic (PLEG): container finished" podID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerID="4bc1010ca577b827412e74a4177b4c92b4b48041fc4851302dbc311545f30ed9" exitCode=0 Jan 28 15:49:55 crc kubenswrapper[4903]: I0128 15:49:55.849028 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vjpx" event={"ID":"202a5ad3-47a0-47cb-89fe-c01d2356e38f","Type":"ContainerDied","Data":"4bc1010ca577b827412e74a4177b4c92b4b48041fc4851302dbc311545f30ed9"} Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.137900 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.152037 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lvtx\" (UniqueName: \"kubernetes.io/projected/202a5ad3-47a0-47cb-89fe-c01d2356e38f-kube-api-access-6lvtx\") pod \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.152132 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-catalog-content\") pod \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.152189 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-utilities\") pod \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\" (UID: \"202a5ad3-47a0-47cb-89fe-c01d2356e38f\") " Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.153993 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-utilities" (OuterVolumeSpecName: "utilities") pod "202a5ad3-47a0-47cb-89fe-c01d2356e38f" (UID: "202a5ad3-47a0-47cb-89fe-c01d2356e38f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.164109 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/202a5ad3-47a0-47cb-89fe-c01d2356e38f-kube-api-access-6lvtx" (OuterVolumeSpecName: "kube-api-access-6lvtx") pod "202a5ad3-47a0-47cb-89fe-c01d2356e38f" (UID: "202a5ad3-47a0-47cb-89fe-c01d2356e38f"). InnerVolumeSpecName "kube-api-access-6lvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.218944 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "202a5ad3-47a0-47cb-89fe-c01d2356e38f" (UID: "202a5ad3-47a0-47cb-89fe-c01d2356e38f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.253585 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lvtx\" (UniqueName: \"kubernetes.io/projected/202a5ad3-47a0-47cb-89fe-c01d2356e38f-kube-api-access-6lvtx\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.253617 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.253626 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/202a5ad3-47a0-47cb-89fe-c01d2356e38f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691094 4903 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.691363 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="extract-content" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691381 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="extract-content" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.691395 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="registry-server" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691404 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="registry-server" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.691425 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="extract-content" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691433 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="extract-content" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.691446 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="extract-utilities" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691454 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="extract-utilities" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.691471 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff95f2d-4408-4d1e-afcc-d3302a406ff4" containerName="pruner" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691479 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff95f2d-4408-4d1e-afcc-d3302a406ff4" containerName="pruner" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.691492 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="extract-utilities" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691500 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="extract-utilities" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.691512 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="registry-server" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691520 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="registry-server" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691659 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" containerName="registry-server" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691676 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="20de9098-3be6-464b-b749-c2836ac0a896" containerName="registry-server" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.691695 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff95f2d-4408-4d1e-afcc-d3302a406ff4" containerName="pruner" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.692110 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.692445 4903 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.692855 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a" gracePeriod=15 Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.693066 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2" gracePeriod=15 Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.693207 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e" gracePeriod=15 Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.693336 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a" gracePeriod=15 Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.693471 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02" gracePeriod=15 Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.693969 4903 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.694443 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 
15:49:57.694507 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.694637 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.694697 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.694761 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.694827 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.694886 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.694950 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.695046 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.696471 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.696570 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.696645 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:49:57 crc kubenswrapper[4903]: E0128 15:49:57.696707 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.696763 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.696928 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.696995 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.697272 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.697377 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.697452 4903 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.697518 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.753443 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.761763 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.761855 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.761912 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.762710 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.762748 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.762788 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.762810 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.762833 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.868737 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.868819 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.868848 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.868891 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.868926 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.868949 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869001 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869022 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869119 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869177 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869210 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869237 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869263 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869293 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869320 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.869385 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.877715 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vjpx" event={"ID":"202a5ad3-47a0-47cb-89fe-c01d2356e38f","Type":"ContainerDied","Data":"b63fb20832975d1a296a8f06f2619484357ab590d7fa2d35070b2164969e4b20"} Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.877791 4903 scope.go:117] "RemoveContainer" containerID="4bc1010ca577b827412e74a4177b4c92b4b48041fc4851302dbc311545f30ed9" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.877995 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vjpx" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.879443 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.880900 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.881595 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.883317 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.884988 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.885706 4903 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02" exitCode=2 Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.896241 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.896461 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:57 crc kubenswrapper[4903]: I0128 15:49:57.896671 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.040897 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.159113 4903 scope.go:117] "RemoveContainer" containerID="724e115353d927eabd5c3513bd3e7179c00f6dc46ef50c76708b96c9791b23b1" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.159148 4903 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.251:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-gw84l.188eefd42c4b9426 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-gw84l,UID:97923485-cab3-4578-ae02-4489827d63ae,APIVersion:v1,ResourceVersion:28622,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 17.408s (17.408s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:49:58.158439462 +0000 UTC m=+270.434410973,LastTimestamp:2026-01-28 15:49:58.158439462 +0000 UTC m=+270.434410973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.196812 4903 scope.go:117] "RemoveContainer" containerID="a21ec688c1403fe282d5e864903bb711ed9179ffd77dfdaf4305ec4f2171a4c4" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.417097 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.417686 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.418043 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.560269 4903 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.560673 4903 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.561144 4903 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.561351 4903 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.561510 4903 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.561547 4903 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.561717 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="200ms" Jan 28 15:49:58 crc kubenswrapper[4903]: E0128 15:49:58.762838 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="400ms" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.894777 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"df120e2fd7f20194112b6d39337b93064c1c54d9003364592ecc56b9eac48f4d"} Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.898223 4903 generic.go:334] "Generic (PLEG): container finished" podID="be3200d9-3341-4a0b-a717-44311b50b23f" containerID="79144ee4f3f8f7e259f46305abbdf4899b71a94b5ab60442183e983c409e5578" exitCode=0 Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.898345 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"be3200d9-3341-4a0b-a717-44311b50b23f","Type":"ContainerDied","Data":"79144ee4f3f8f7e259f46305abbdf4899b71a94b5ab60442183e983c409e5578"} Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.898980 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.899281 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.899613 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.900672 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.901879 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.902597 4903 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2" exitCode=0 Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.902623 4903 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e" exitCode=0 Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.902635 4903 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a" exitCode=0 Jan 28 15:49:58 crc kubenswrapper[4903]: I0128 15:49:58.902671 4903 scope.go:117] "RemoveContainer" containerID="0f2147e13665ef5cc771f6349d7ba8b714390a2cd11597d10191ad0c675fd0cf" Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.164231 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="800ms" Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.687903 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:49:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:49:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:49:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:49:59Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:2c1439ebdda893daf377def2d4397762658d82b531bb83f7ae41a4e7f26d4407\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c044fa5dc076cb0fb053c5a676c39093e5fd06f6cc0eeaff8a747680c99c8b7f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675724519},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:68c28a690c4c3482a63d6de9cf3b80304e983243444eb4d2c5fcaf5c051eb54b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a273081c72178c20c79eca9b18dbb926d33a6bb826b215c14de6b31207e497ca\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:364f5956de22b63db7dad4fcdd1f2740f71a482026c15aa3e2abebfbc5bf2fd7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d3d262f90dd0f3c3f809b45f327ca086741a47f73e44560b04787609f0f99567\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.688613 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.689092 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.689628 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.690110 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.690143 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:49:59 crc kubenswrapper[4903]: I0128 15:49:59.912098 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbsl" event={"ID":"1c684124-cb30-4db3-9ece-fd4baa23a639","Type":"ContainerStarted","Data":"17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c"} Jan 28 15:49:59 crc kubenswrapper[4903]: E0128 15:49:59.965615 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="1.6s" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.140584 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.141654 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.142066 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.142344 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.210629 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-var-lock\") pod \"be3200d9-3341-4a0b-a717-44311b50b23f\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.210756 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be3200d9-3341-4a0b-a717-44311b50b23f-kube-api-access\") pod \"be3200d9-3341-4a0b-a717-44311b50b23f\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.210837 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-kubelet-dir\") pod \"be3200d9-3341-4a0b-a717-44311b50b23f\" (UID: \"be3200d9-3341-4a0b-a717-44311b50b23f\") " Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.210878 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-var-lock" (OuterVolumeSpecName: "var-lock") pod "be3200d9-3341-4a0b-a717-44311b50b23f" (UID: "be3200d9-3341-4a0b-a717-44311b50b23f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.211061 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "be3200d9-3341-4a0b-a717-44311b50b23f" (UID: "be3200d9-3341-4a0b-a717-44311b50b23f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.211120 4903 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.217418 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3200d9-3341-4a0b-a717-44311b50b23f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "be3200d9-3341-4a0b-a717-44311b50b23f" (UID: "be3200d9-3341-4a0b-a717-44311b50b23f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.312434 4903 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be3200d9-3341-4a0b-a717-44311b50b23f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.312488 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be3200d9-3341-4a0b-a717-44311b50b23f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.591139 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.592127 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.592613 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.592778 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.592971 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.593185 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.716088 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.716165 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.716194 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.716432 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.716462 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.716478 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.817512 4903 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.817566 4903 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.817576 4903 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.919696 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"be3200d9-3341-4a0b-a717-44311b50b23f","Type":"ContainerDied","Data":"3957a846d3fa6a80e0512766bcc29a3043a5947a9a326068cb4a8847446c996c"} Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.919744 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3957a846d3fa6a80e0512766bcc29a3043a5947a9a326068cb4a8847446c996c" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.919769 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.922169 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r7vcv" event={"ID":"f3e140f0-9bf3-4817-af15-b215b941ba85","Type":"ContainerStarted","Data":"4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92"} Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.922830 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.923259 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.923687 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.924098 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.924360 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.924674 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.924953 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.925189 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: 
connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.925420 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.925551 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.925665 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.926565 4903 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a" exitCode=0 Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.926642 4903 scope.go:117] "RemoveContainer" containerID="48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.926664 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.935448 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9"} Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.936403 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.936582 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.936757 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.936918 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 
38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.937090 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.938402 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerStarted","Data":"4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05"} Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.939334 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.939568 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.939864 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.940309 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.940487 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.940676 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.942745 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjhpx" event={"ID":"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f","Type":"ContainerStarted","Data":"2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170"} Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.943431 4903 status_manager.go:851] "Failed 
to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.944148 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.944511 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.944717 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.944872 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.945026 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.945173 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.945350 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.945619 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.945788 4903 
status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.945954 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.946096 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.946238 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.946376 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.946512 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.946667 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.947914 4903 scope.go:117] "RemoveContainer" containerID="e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.961640 4903 scope.go:117] "RemoveContainer" containerID="4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.975615 4903 scope.go:117] "RemoveContainer" containerID="33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02" Jan 28 15:50:00 crc kubenswrapper[4903]: I0128 15:50:00.988188 4903 scope.go:117] "RemoveContainer" containerID="9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.004932 4903 scope.go:117] "RemoveContainer" 
containerID="4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.033715 4903 scope.go:117] "RemoveContainer" containerID="48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2" Jan 28 15:50:01 crc kubenswrapper[4903]: E0128 15:50:01.043122 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\": container with ID starting with 48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2 not found: ID does not exist" containerID="48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.043175 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2"} err="failed to get container status \"48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\": rpc error: code = NotFound desc = could not find container \"48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2\": container with ID starting with 48ce7b853b4cc11e27c6b5957cef3db45b1fbfba705228f04ae9ecfa6b30d5f2 not found: ID does not exist" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.043208 4903 scope.go:117] "RemoveContainer" containerID="e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e" Jan 28 15:50:01 crc kubenswrapper[4903]: E0128 15:50:01.043667 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\": container with ID starting with e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e not found: ID does not exist" containerID="e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.043699 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e"} err="failed to get container status \"e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\": rpc error: code = NotFound desc = could not find container \"e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e\": container with ID starting with e329ad9fd39ab0bafc9c880b3fdd4a96eccf6ed2c6f4d4a0f7fe4d186053059e not found: ID does not exist" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.043720 4903 scope.go:117] "RemoveContainer" containerID="4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a" Jan 28 15:50:01 crc kubenswrapper[4903]: E0128 15:50:01.043968 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\": container with ID starting with 4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a not found: ID does not exist" containerID="4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.043996 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a"} err="failed to get container status \"4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\": rpc error: code = 
NotFound desc = could not find container \"4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a\": container with ID starting with 4166948b19f563ff0929953084d2021bfa6dc14e6c6b6fb3e5cfe6be50af801a not found: ID does not exist" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.044015 4903 scope.go:117] "RemoveContainer" containerID="33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02" Jan 28 15:50:01 crc kubenswrapper[4903]: E0128 15:50:01.044336 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\": container with ID starting with 33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02 not found: ID does not exist" containerID="33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.044401 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02"} err="failed to get container status \"33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\": rpc error: code = NotFound desc = could not find container \"33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02\": container with ID starting with 33683548f51f9fb9f673b5414ab37422b888ae9c9b9554803f86f9794b412b02 not found: ID does not exist" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.044439 4903 scope.go:117] "RemoveContainer" containerID="9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a" Jan 28 15:50:01 crc kubenswrapper[4903]: E0128 15:50:01.045092 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\": container with ID starting with 9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a not found: ID does not exist" containerID="9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.045128 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a"} err="failed to get container status \"9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\": rpc error: code = NotFound desc = could not find container \"9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a\": container with ID starting with 9b06ca1551105e2cda51a72ecb1492f3d1ebd573640aff778d3bfe494694740a not found: ID does not exist" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.045151 4903 scope.go:117] "RemoveContainer" containerID="4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b" Jan 28 15:50:01 crc kubenswrapper[4903]: E0128 15:50:01.045406 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\": container with ID starting with 4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b not found: ID does not exist" containerID="4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.045434 4903 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b"} err="failed to get container status \"4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\": rpc error: code = NotFound desc = could not find container \"4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b\": container with ID starting with 4a41621f9a96a252dbf0c8cb5595a1d54fde5e97ae8dc16443fc3a3ea6c8139b not found: ID does not exist" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.533060 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:50:01 crc kubenswrapper[4903]: I0128 15:50:01.533457 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:50:01 crc kubenswrapper[4903]: E0128 15:50:01.566651 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="3.2s" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.173879 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.174234 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.217787 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.218351 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.218860 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.219240 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.219717 4903 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.220150 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" 
pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.220483 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.220720 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.221000 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.420643 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 15:50:02 crc kubenswrapper[4903]: I0128 15:50:02.572024 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gw84l" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="registry-server" probeResult="failure" output=< Jan 28 15:50:02 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 15:50:02 crc kubenswrapper[4903]: > Jan 28 15:50:04 crc kubenswrapper[4903]: I0128 15:50:04.754606 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:50:04 crc kubenswrapper[4903]: I0128 15:50:04.755067 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:50:04 crc kubenswrapper[4903]: E0128 15:50:04.767784 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="6.4s" Jan 28 15:50:05 crc kubenswrapper[4903]: I0128 15:50:05.142601 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:50:05 crc kubenswrapper[4903]: I0128 15:50:05.142676 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:50:05 crc kubenswrapper[4903]: I0128 15:50:05.799810 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r7vcv" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="registry-server" probeResult="failure" output=< Jan 28 15:50:05 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 15:50:05 crc kubenswrapper[4903]: 
> Jan 28 15:50:06 crc kubenswrapper[4903]: I0128 15:50:06.179827 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gjhpx" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="registry-server" probeResult="failure" output=< Jan 28 15:50:06 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 15:50:06 crc kubenswrapper[4903]: > Jan 28 15:50:06 crc kubenswrapper[4903]: E0128 15:50:06.331304 4903 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.251:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-gw84l.188eefd42c4b9426 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-gw84l,UID:97923485-cab3-4578-ae02-4489827d63ae,APIVersion:v1,ResourceVersion:28622,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 17.408s (17.408s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:49:58.158439462 +0000 UTC m=+270.434410973,LastTimestamp:2026-01-28 15:49:58.158439462 +0000 UTC m=+270.434410973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:50:08 crc kubenswrapper[4903]: I0128 15:50:08.418512 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:08 crc kubenswrapper[4903]: I0128 15:50:08.419702 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:08 crc kubenswrapper[4903]: I0128 15:50:08.419950 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:08 crc kubenswrapper[4903]: I0128 15:50:08.420157 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:08 crc kubenswrapper[4903]: I0128 15:50:08.420335 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:08 crc kubenswrapper[4903]: I0128 15:50:08.420503 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:08 crc kubenswrapper[4903]: I0128 15:50:08.420708 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:08 crc kubenswrapper[4903]: E0128 15:50:08.496830 4903 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.251:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" volumeName="registry-storage" Jan 28 15:50:09 crc kubenswrapper[4903]: E0128 15:50:09.854883 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:50:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:50:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:50:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:50:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:2c1439ebdda893daf377def2d4397762658d82b531bb83f7ae41a4e7f26d4407\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c044fa5dc076cb0fb053c5a676c39093e5fd06f6cc0eeaff8a747680c99c8b7f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675724519},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:68c28a690c4c3482a63d6de9cf3b80304e983243444eb4d2c5fcaf5c051eb54b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a273081c72178c20c79eca9
b18dbb926d33a6bb826b215c14de6b31207e497ca\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:364f5956de22b63db7dad4fcdd1f2740f71a482026c15aa3e2abebfbc5bf2fd7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d3d262f90dd0f3c3f809b45f327ca086741a47f73e44560b04787609f0f99567\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry
.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:09 crc kubenswrapper[4903]: E0128 15:50:09.855230 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:09 crc kubenswrapper[4903]: E0128 15:50:09.855578 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:09 crc kubenswrapper[4903]: E0128 15:50:09.855890 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:09 crc kubenswrapper[4903]: E0128 15:50:09.856091 4903 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:09 crc kubenswrapper[4903]: E0128 15:50:09.856104 4903 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:50:11 crc kubenswrapper[4903]: E0128 15:50:11.169302 4903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="7s" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.607946 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.608770 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" 
pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.609243 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.609714 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.610171 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.610535 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.610784 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.611090 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.675139 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.675854 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.676603 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.677458 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.677814 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.678149 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.678462 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.678713 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.998092 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.998160 4903 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009" exitCode=1 Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.998216 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009"} Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.998941 4903 scope.go:117] "RemoveContainer" containerID="9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.999236 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection 
refused" Jan 28 15:50:11 crc kubenswrapper[4903]: I0128 15:50:11.999689 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.000104 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.000476 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.000969 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.001259 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.001612 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.001999 4903 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.231702 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.233254 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.233681 4903 status_manager.go:851] "Failed to get status for pod" 
podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.233954 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.234235 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.234550 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.234845 4903 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.235093 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:12 crc kubenswrapper[4903]: I0128 15:50:12.235321 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.006500 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.006608 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c09f97a19d28aeebd4e0de0c48c8b7ff6941cdfa4bd2cbbbab6988883aa72e96"} Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.007574 4903 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.008154 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.008703 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.009250 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.009730 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.010052 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.010443 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.010921 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.173242 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.173501 4903 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 15:50:13 
crc kubenswrapper[4903]: I0128 15:50:13.173592 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.413326 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.414393 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.414866 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.415522 4903 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.416277 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.416800 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.417149 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.417427 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.417712 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" 
pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.426840 4903 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.426868 4903 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:13 crc kubenswrapper[4903]: E0128 15:50:13.427163 4903 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:13 crc kubenswrapper[4903]: I0128 15:50:13.427629 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:13 crc kubenswrapper[4903]: W0128 15:50:13.449327 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-7d47c558633655c3d29e5c6b70c87772ce593d5c11af374e9fa50187124d6198 WatchSource:0}: Error finding container 7d47c558633655c3d29e5c6b70c87772ce593d5c11af374e9fa50187124d6198: Status 404 returned error can't find the container with id 7d47c558633655c3d29e5c6b70c87772ce593d5c11af374e9fa50187124d6198 Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.016035 4903 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c6bb32dad38f6c6477d44b6b344fa423fbedb1421fc37b851b3bb778cdc495d6" exitCode=0 Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.016130 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c6bb32dad38f6c6477d44b6b344fa423fbedb1421fc37b851b3bb778cdc495d6"} Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.016181 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7d47c558633655c3d29e5c6b70c87772ce593d5c11af374e9fa50187124d6198"} Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.016568 4903 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.016590 4903 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.016978 4903 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: E0128 15:50:14.017004 4903 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.017264 4903 status_manager.go:851] "Failed to get status for pod" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" pod="openshift-marketplace/certified-operators-6vjpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6vjpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.017553 4903 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.018010 4903 status_manager.go:851] "Failed to get status for pod" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" pod="openshift-marketplace/redhat-operators-r7vcv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-r7vcv\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.018238 4903 status_manager.go:851] "Failed to get status for pod" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" pod="openshift-marketplace/community-operators-4gbsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-4gbsl\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.018563 4903 status_manager.go:851] "Failed to get status for pod" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" pod="openshift-marketplace/redhat-operators-gjhpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gjhpx\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.018877 4903 status_manager.go:851] "Failed to get status for pod" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.019353 4903 status_manager.go:851] "Failed to get status for pod" podUID="97923485-cab3-4578-ae02-4489827d63ae" pod="openshift-marketplace/certified-operators-gw84l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gw84l\": dial tcp 38.102.83.251:6443: connect: connection refused" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.808770 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:50:14 crc kubenswrapper[4903]: I0128 15:50:14.852481 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:50:15 crc kubenswrapper[4903]: I0128 15:50:15.025812 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"82b7b636024ac0e032613871a812034ada52e812155d62787d3b163ad4c90b17"} Jan 28 15:50:15 crc kubenswrapper[4903]: I0128 15:50:15.025858 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1d70c33b44b17bd8f67f83e08502094b2003133cf9eb887268816560637af55b"} Jan 28 15:50:15 crc kubenswrapper[4903]: I0128 15:50:15.025872 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"823a410d1c72ad2bbbc8bf36e65aa0b177cfd69f395d68c591a5c8530193519a"} Jan 28 15:50:15 crc kubenswrapper[4903]: I0128 15:50:15.025897 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"71efa28f299cf6baf10bc2d5418a3285932386ecafa446394b0fac41f66c85e1"} Jan 28 15:50:15 crc kubenswrapper[4903]: I0128 15:50:15.186883 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:50:15 crc kubenswrapper[4903]: I0128 15:50:15.219097 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:50:16 crc kubenswrapper[4903]: I0128 15:50:16.035749 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f5cf11b6f4aa1fd9337d847d332d5b8191b3fec413243492e940b71ef57a7837"} Jan 28 15:50:16 crc kubenswrapper[4903]: I0128 15:50:16.036184 4903 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:16 crc kubenswrapper[4903]: I0128 15:50:16.036219 4903 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:18 crc kubenswrapper[4903]: I0128 15:50:18.428254 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:18 crc kubenswrapper[4903]: I0128 15:50:18.428520 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:18 crc kubenswrapper[4903]: I0128 15:50:18.434489 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:21 crc kubenswrapper[4903]: I0128 15:50:21.043241 4903 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:21 crc kubenswrapper[4903]: I0128 15:50:21.065237 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:21 crc kubenswrapper[4903]: I0128 15:50:21.065450 4903 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:21 crc kubenswrapper[4903]: I0128 15:50:21.065650 4903 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:21 crc 
kubenswrapper[4903]: I0128 15:50:21.068410 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:21 crc kubenswrapper[4903]: I0128 15:50:21.070336 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5b727d9f-7392-48a0-bd2e-50a054b761f9" Jan 28 15:50:21 crc kubenswrapper[4903]: I0128 15:50:21.139699 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:50:22 crc kubenswrapper[4903]: I0128 15:50:22.070747 4903 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:22 crc kubenswrapper[4903]: I0128 15:50:22.071070 4903 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:23 crc kubenswrapper[4903]: I0128 15:50:23.076372 4903 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:23 crc kubenswrapper[4903]: I0128 15:50:23.076430 4903 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26c57e9a-4fd7-46bd-b562-afc490cb6bf2" Jan 28 15:50:23 crc kubenswrapper[4903]: I0128 15:50:23.174091 4903 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 15:50:23 crc kubenswrapper[4903]: I0128 15:50:23.174168 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 15:50:28 crc kubenswrapper[4903]: I0128 15:50:28.220123 4903 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 15:50:28 crc kubenswrapper[4903]: I0128 15:50:28.451183 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5b727d9f-7392-48a0-bd2e-50a054b761f9" Jan 28 15:50:30 crc kubenswrapper[4903]: I0128 15:50:30.351991 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:50:30 crc kubenswrapper[4903]: I0128 15:50:30.624893 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:50:30 crc kubenswrapper[4903]: I0128 15:50:30.783383 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.261622 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.399447 4903 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.523244 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.632164 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.677650 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.816780 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.832695 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.845198 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:50:31 crc kubenswrapper[4903]: I0128 15:50:31.921300 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.015475 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.131081 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.382673 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.437123 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.482912 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.496382 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.707589 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:50:32 crc kubenswrapper[4903]: I0128 15:50:32.857831 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.112156 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.137948 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.173714 4903 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.173819 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.173959 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.175189 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c09f97a19d28aeebd4e0de0c48c8b7ff6941cdfa4bd2cbbbab6988883aa72e96"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.175462 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://c09f97a19d28aeebd4e0de0c48c8b7ff6941cdfa4bd2cbbbab6988883aa72e96" gracePeriod=30 Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.187108 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.242884 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.475897 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.553771 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.585516 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.599400 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.656637 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:50:33 crc kubenswrapper[4903]: I0128 15:50:33.753959 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.014905 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.097335 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.126369 4903 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.158864 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.254423 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.286801 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.372072 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.428756 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.475975 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.478304 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.492018 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.689485 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.756360 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.770510 4903 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.815891 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.827593 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 15:50:34 crc kubenswrapper[4903]: I0128 15:50:34.969235 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.028896 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.086709 4903 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.104826 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.154408 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.190829 4903 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.430439 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.444957 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.486669 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.523981 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.545963 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.641319 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.722677 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.732237 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.848718 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.894871 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:50:35 crc kubenswrapper[4903]: I0128 15:50:35.938617 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.091066 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.205397 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.249668 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.298329 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.390201 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.448516 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.554968 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.615158 4903 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.626160 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.646440 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.675655 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.739709 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.742868 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.782064 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.831380 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.864782 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.878678 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.919795 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:50:36 crc kubenswrapper[4903]: I0128 15:50:36.983615 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.012617 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.086006 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.140861 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.164811 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.207188 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.254639 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.357654 4903 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.611144 4903 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.643552 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.656619 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.695255 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.709553 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.718813 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.726355 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.741827 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.767807 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.840889 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.914835 4903 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.916433 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.926923 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.972715 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:50:37 crc kubenswrapper[4903]: I0128 15:50:37.983654 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.053825 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.178322 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.318596 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.412616 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.525754 4903 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.546739 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.696793 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.733233 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.799106 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.901754 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:50:38 crc kubenswrapper[4903]: I0128 15:50:38.932062 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.009257 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.277072 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.376440 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.386992 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.407384 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.451910 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.494895 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.495202 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.505167 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.542924 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.550648 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.745671 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.796481 4903 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.807838 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.903220 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:50:39 crc kubenswrapper[4903]: I0128 15:50:39.924000 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.129559 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.235114 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.293470 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.299924 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.334930 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.353128 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.356007 4903 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.357191 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r7vcv" podStartSLOduration=46.393329551 podStartE2EDuration="2m26.357173336s" podCreationTimestamp="2026-01-28 15:48:14 +0000 UTC" firstStartedPulling="2026-01-28 15:48:17.120196944 +0000 UTC m=+169.396168445" lastFinishedPulling="2026-01-28 15:49:57.084040719 +0000 UTC m=+269.360012230" observedRunningTime="2026-01-28 15:50:20.748820193 +0000 UTC m=+293.024791704" watchObservedRunningTime="2026-01-28 15:50:40.357173336 +0000 UTC m=+312.633144847" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.358094 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.358088172 podStartE2EDuration="43.358088172s" podCreationTimestamp="2026-01-28 15:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:50:20.735392908 +0000 UTC m=+293.011364419" watchObservedRunningTime="2026-01-28 15:50:40.358088172 +0000 UTC m=+312.634059673" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.358888 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gw84l" podStartSLOduration=46.288250779 podStartE2EDuration="2m29.358880745s" podCreationTimestamp="2026-01-28 15:48:11 +0000 UTC" firstStartedPulling="2026-01-28 15:48:15.087798035 +0000 UTC m=+167.363769546" lastFinishedPulling="2026-01-28 15:49:58.158428001 
+0000 UTC m=+270.434399512" observedRunningTime="2026-01-28 15:50:20.694268372 +0000 UTC m=+292.970239883" watchObservedRunningTime="2026-01-28 15:50:40.358880745 +0000 UTC m=+312.634852256" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.358977 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gjhpx" podStartSLOduration=45.754464777 podStartE2EDuration="2m26.358971388s" podCreationTimestamp="2026-01-28 15:48:14 +0000 UTC" firstStartedPulling="2026-01-28 15:48:17.110469927 +0000 UTC m=+169.386441448" lastFinishedPulling="2026-01-28 15:49:57.714976548 +0000 UTC m=+269.990948059" observedRunningTime="2026-01-28 15:50:20.774079406 +0000 UTC m=+293.050050917" watchObservedRunningTime="2026-01-28 15:50:40.358971388 +0000 UTC m=+312.634942909" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.359828 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gbsl" podStartSLOduration=50.641531928 podStartE2EDuration="2m29.359821701s" podCreationTimestamp="2026-01-28 15:48:11 +0000 UTC" firstStartedPulling="2026-01-28 15:48:16.101859799 +0000 UTC m=+168.377831310" lastFinishedPulling="2026-01-28 15:49:54.820149552 +0000 UTC m=+267.096121083" observedRunningTime="2026-01-28 15:50:20.761377162 +0000 UTC m=+293.037348673" watchObservedRunningTime="2026-01-28 15:50:40.359821701 +0000 UTC m=+312.635793222" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.360888 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/certified-operators-6vjpx"] Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.360930 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.365880 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.377692 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.377675172 podStartE2EDuration="19.377675172s" podCreationTimestamp="2026-01-28 15:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:50:40.377116677 +0000 UTC m=+312.653088208" watchObservedRunningTime="2026-01-28 15:50:40.377675172 +0000 UTC m=+312.653646683" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.420567 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="202a5ad3-47a0-47cb-89fe-c01d2356e38f" path="/var/lib/kubelet/pods/202a5ad3-47a0-47cb-89fe-c01d2356e38f/volumes" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.452628 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.467934 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.470148 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.515708 4903 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.545752 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.584001 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.586078 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.647673 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.683609 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.773387 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.793217 4903 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.796151 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.843861 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.855689 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.891285 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.903451 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:50:40 crc kubenswrapper[4903]: I0128 15:50:40.907199 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.040056 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.059803 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.074974 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.115825 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.123824 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.237270 4903 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.253339 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.261447 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.378294 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.437544 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.443155 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.450685 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.463577 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.517559 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.552633 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.559350 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.572288 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.580485 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.601075 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.663204 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.680757 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.717929 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.725444 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.798035 4903 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.869140 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.913630 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.972781 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:50:41 crc kubenswrapper[4903]: I0128 15:50:41.988405 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.106925 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.137972 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.207885 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.247492 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.296403 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.305872 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.307442 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.318445 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.444963 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.483992 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.533220 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.554269 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.673323 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.741230 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.807765 4903 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.847959 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.876266 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:50:42 crc kubenswrapper[4903]: I0128 15:50:42.999481 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.025710 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.051579 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.059719 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.218225 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.220936 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.227348 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.237218 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.285748 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.307625 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.326508 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.338154 4903 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.338426 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9" gracePeriod=5 Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.369745 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.488197 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.513754 4903 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.591621 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:50:43 crc kubenswrapper[4903]: I0128 15:50:43.741925 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.158085 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.220854 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.248697 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.516694 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.539013 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.558548 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.796272 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.825563 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:50:44 crc kubenswrapper[4903]: I0128 15:50:44.859519 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.075007 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.264382 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.271741 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.383895 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.618989 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.647355 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.772140 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.820134 4903 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Jan 28 15:50:45 crc kubenswrapper[4903]: I0128 15:50:45.889651 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:50:46 crc kubenswrapper[4903]: I0128 15:50:46.106185 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:50:46 crc kubenswrapper[4903]: I0128 15:50:46.306940 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:50:46 crc kubenswrapper[4903]: I0128 15:50:46.365940 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 15:50:46 crc kubenswrapper[4903]: I0128 15:50:46.533726 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:50:46 crc kubenswrapper[4903]: I0128 15:50:46.561286 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:50:46 crc kubenswrapper[4903]: I0128 15:50:46.698385 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.495964 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.496295 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.557783 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.557901 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.557932 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.557930 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558088 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558082 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558128 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558176 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558277 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558659 4903 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558677 4903 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558691 4903 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.558703 4903 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.566782 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:50:48 crc kubenswrapper[4903]: I0128 15:50:48.660362 4903 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.240460 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.241023 4903 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9" exitCode=137 Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.241091 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.241112 4903 scope.go:117] "RemoveContainer" containerID="91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9" Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.261974 4903 scope.go:117] "RemoveContainer" containerID="91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9" Jan 28 15:50:49 crc kubenswrapper[4903]: E0128 15:50:49.262819 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9\": container with ID starting with 91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9 not found: ID does not exist" containerID="91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9" Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.262853 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9"} err="failed to get container status \"91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9\": rpc error: code = NotFound desc = could not find container \"91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9\": container with ID starting with 91db589e96135354cbf31382ade1069f5553036cbd1e1de632bdacb3b72dcfb9 not found: ID does not exist" Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.547492 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw84l"] Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.547774 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gw84l" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="registry-server" containerID="cri-o://4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05" gracePeriod=30 Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.549462 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gbsl"] Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.549748 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4gbsl" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="registry-server" containerID="cri-o://17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c" gracePeriod=30 Jan 28 15:50:49 
crc kubenswrapper[4903]: I0128 15:50:49.559588 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-954mb"] Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.559829 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-954mb" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="registry-server" containerID="cri-o://d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f" gracePeriod=30 Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.563308 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fp7dl"] Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.563440 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" containerID="cri-o://4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0" gracePeriod=30 Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.584460 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87z2"] Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.584842 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j87z2" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="registry-server" containerID="cri-o://88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d" gracePeriod=30 Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.591593 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gjhpx"] Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.592336 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gjhpx" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="registry-server" containerID="cri-o://2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170" gracePeriod=30 Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.602214 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r7vcv"] Jan 28 15:50:49 crc kubenswrapper[4903]: I0128 15:50:49.602594 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r7vcv" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="registry-server" containerID="cri-o://4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92" gracePeriod=30 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.102162 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.110014 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.112437 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.119172 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.133692 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.140744 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.146457 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.188863 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-operator-metrics\") pod \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.189295 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-catalog-content\") pod \"97923485-cab3-4578-ae02-4489827d63ae\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.191150 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v7vn\" (UniqueName: \"kubernetes.io/projected/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-kube-api-access-2v7vn\") pod \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\" (UID: \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.191566 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-utilities\") pod \"1c684124-cb30-4db3-9ece-fd4baa23a639\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.191736 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjdhs\" (UniqueName: \"kubernetes.io/projected/6f6c4494-66ec-40c7-960f-0ab4558af7d8-kube-api-access-mjdhs\") pod \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.191836 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-catalog-content\") pod \"8bd3dd6e-5429-4193-8531-6ba1b357358f\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.191968 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-catalog-content\") pod \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.192511 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-trusted-ca\") pod \"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\" (UID: 
\"a35915fe-4b5b-4c69-8abb-2d2d22e423c5\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.193056 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6lj2\" (UniqueName: \"kubernetes.io/projected/1c684124-cb30-4db3-9ece-fd4baa23a639-kube-api-access-x6lj2\") pod \"1c684124-cb30-4db3-9ece-fd4baa23a639\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.193330 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9mjh\" (UniqueName: \"kubernetes.io/projected/8bd3dd6e-5429-4193-8531-6ba1b357358f-kube-api-access-b9mjh\") pod \"8bd3dd6e-5429-4193-8531-6ba1b357358f\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.193797 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndlzt\" (UniqueName: \"kubernetes.io/projected/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-kube-api-access-ndlzt\") pod \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.208714 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-utilities\") pod \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\" (UID: \"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.209111 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqvm5\" (UniqueName: \"kubernetes.io/projected/97923485-cab3-4578-ae02-4489827d63ae-kube-api-access-rqvm5\") pod \"97923485-cab3-4578-ae02-4489827d63ae\" (UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.209161 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-utilities\") pod \"8bd3dd6e-5429-4193-8531-6ba1b357358f\" (UID: \"8bd3dd6e-5429-4193-8531-6ba1b357358f\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.209288 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-catalog-content\") pod \"1c684124-cb30-4db3-9ece-fd4baa23a639\" (UID: \"1c684124-cb30-4db3-9ece-fd4baa23a639\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.209347 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-utilities\") pod \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.209470 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-catalog-content\") pod \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\" (UID: \"6f6c4494-66ec-40c7-960f-0ab4558af7d8\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.209705 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-utilities\") pod \"97923485-cab3-4578-ae02-4489827d63ae\" 
(UID: \"97923485-cab3-4578-ae02-4489827d63ae\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.195074 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a35915fe-4b5b-4c69-8abb-2d2d22e423c5" (UID: "a35915fe-4b5b-4c69-8abb-2d2d22e423c5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.196260 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-utilities" (OuterVolumeSpecName: "utilities") pod "1c684124-cb30-4db3-9ece-fd4baa23a639" (UID: "1c684124-cb30-4db3-9ece-fd4baa23a639"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.196446 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-kube-api-access-2v7vn" (OuterVolumeSpecName: "kube-api-access-2v7vn") pod "a35915fe-4b5b-4c69-8abb-2d2d22e423c5" (UID: "a35915fe-4b5b-4c69-8abb-2d2d22e423c5"). InnerVolumeSpecName "kube-api-access-2v7vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.197636 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c684124-cb30-4db3-9ece-fd4baa23a639-kube-api-access-x6lj2" (OuterVolumeSpecName: "kube-api-access-x6lj2") pod "1c684124-cb30-4db3-9ece-fd4baa23a639" (UID: "1c684124-cb30-4db3-9ece-fd4baa23a639"). InnerVolumeSpecName "kube-api-access-x6lj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.197874 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6c4494-66ec-40c7-960f-0ab4558af7d8-kube-api-access-mjdhs" (OuterVolumeSpecName: "kube-api-access-mjdhs") pod "6f6c4494-66ec-40c7-960f-0ab4558af7d8" (UID: "6f6c4494-66ec-40c7-960f-0ab4558af7d8"). InnerVolumeSpecName "kube-api-access-mjdhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.198232 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a35915fe-4b5b-4c69-8abb-2d2d22e423c5" (UID: "a35915fe-4b5b-4c69-8abb-2d2d22e423c5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.198475 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-kube-api-access-ndlzt" (OuterVolumeSpecName: "kube-api-access-ndlzt") pod "ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" (UID: "ce40a6c4-bba4-43dc-8aa7-3a63fd44447f"). InnerVolumeSpecName "kube-api-access-ndlzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.199828 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd3dd6e-5429-4193-8531-6ba1b357358f-kube-api-access-b9mjh" (OuterVolumeSpecName: "kube-api-access-b9mjh") pod "8bd3dd6e-5429-4193-8531-6ba1b357358f" (UID: "8bd3dd6e-5429-4193-8531-6ba1b357358f"). InnerVolumeSpecName "kube-api-access-b9mjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.209606 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-utilities" (OuterVolumeSpecName: "utilities") pod "ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" (UID: "ce40a6c4-bba4-43dc-8aa7-3a63fd44447f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210328 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-utilities" (OuterVolumeSpecName: "utilities") pod "8bd3dd6e-5429-4193-8531-6ba1b357358f" (UID: "8bd3dd6e-5429-4193-8531-6ba1b357358f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210406 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-utilities" (OuterVolumeSpecName: "utilities") pod "6f6c4494-66ec-40c7-960f-0ab4558af7d8" (UID: "6f6c4494-66ec-40c7-960f-0ab4558af7d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210713 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6lj2\" (UniqueName: \"kubernetes.io/projected/1c684124-cb30-4db3-9ece-fd4baa23a639-kube-api-access-x6lj2\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210742 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9mjh\" (UniqueName: \"kubernetes.io/projected/8bd3dd6e-5429-4193-8531-6ba1b357358f-kube-api-access-b9mjh\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210752 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndlzt\" (UniqueName: \"kubernetes.io/projected/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-kube-api-access-ndlzt\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210763 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210772 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210780 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210792 4903 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210801 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210809 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v7vn\" (UniqueName: \"kubernetes.io/projected/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-kube-api-access-2v7vn\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210817 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjdhs\" (UniqueName: \"kubernetes.io/projected/6f6c4494-66ec-40c7-960f-0ab4558af7d8-kube-api-access-mjdhs\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.210826 4903 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a35915fe-4b5b-4c69-8abb-2d2d22e423c5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.211401 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-utilities" (OuterVolumeSpecName: "utilities") pod "97923485-cab3-4578-ae02-4489827d63ae" (UID: "97923485-cab3-4578-ae02-4489827d63ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.215898 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97923485-cab3-4578-ae02-4489827d63ae-kube-api-access-rqvm5" (OuterVolumeSpecName: "kube-api-access-rqvm5") pod "97923485-cab3-4578-ae02-4489827d63ae" (UID: "97923485-cab3-4578-ae02-4489827d63ae"). InnerVolumeSpecName "kube-api-access-rqvm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.240914 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f6c4494-66ec-40c7-960f-0ab4558af7d8" (UID: "6f6c4494-66ec-40c7-960f-0ab4558af7d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.248585 4903 generic.go:334] "Generic (PLEG): container finished" podID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerID="2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170" exitCode=0 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.248651 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjhpx" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.248664 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjhpx" event={"ID":"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f","Type":"ContainerDied","Data":"2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.248732 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjhpx" event={"ID":"ce40a6c4-bba4-43dc-8aa7-3a63fd44447f","Type":"ContainerDied","Data":"f471e04c52ac0a9dfc39d5af627b2d58b931174ab36cd8c5d0d2b35c6f095da6"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.248753 4903 scope.go:117] "RemoveContainer" containerID="2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.252486 4903 generic.go:334] "Generic (PLEG): container finished" podID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerID="4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92" exitCode=0 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.252606 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r7vcv" event={"ID":"f3e140f0-9bf3-4817-af15-b215b941ba85","Type":"ContainerDied","Data":"4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.252601 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r7vcv" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.252637 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r7vcv" event={"ID":"f3e140f0-9bf3-4817-af15-b215b941ba85","Type":"ContainerDied","Data":"cdd5d25adbc67c076f801e199473f4a830e38020b61ff0814a644ffa985989ac"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.255154 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97923485-cab3-4578-ae02-4489827d63ae" (UID: "97923485-cab3-4578-ae02-4489827d63ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.256710 4903 generic.go:334] "Generic (PLEG): container finished" podID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerID="d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f" exitCode=0 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.256761 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-954mb" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.256792 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-954mb" event={"ID":"8bd3dd6e-5429-4193-8531-6ba1b357358f","Type":"ContainerDied","Data":"d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.256825 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-954mb" event={"ID":"8bd3dd6e-5429-4193-8531-6ba1b357358f","Type":"ContainerDied","Data":"56395b3a79547182ee46a74c0c4d8b41376c63666ca8d4836c0a63df7f7ce775"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.261108 4903 generic.go:334] "Generic (PLEG): container finished" podID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerID="4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0" exitCode=0 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.261167 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.261182 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" event={"ID":"a35915fe-4b5b-4c69-8abb-2d2d22e423c5","Type":"ContainerDied","Data":"4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.261466 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" event={"ID":"a35915fe-4b5b-4c69-8abb-2d2d22e423c5","Type":"ContainerDied","Data":"7b07c8f9732fca4477a46b9230c9387110de525bb3f6a17e5a2702ce9d6cb269"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.263485 4903 generic.go:334] "Generic (PLEG): container finished" podID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerID="17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c" exitCode=0 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.263584 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbsl" event={"ID":"1c684124-cb30-4db3-9ece-fd4baa23a639","Type":"ContainerDied","Data":"17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.263624 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gbsl" event={"ID":"1c684124-cb30-4db3-9ece-fd4baa23a639","Type":"ContainerDied","Data":"4d531ac4a50bb1d02c6568f5e70a6b9e5485c889f66ab117076d21b334ecc9f5"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.263971 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bd3dd6e-5429-4193-8531-6ba1b357358f" (UID: "8bd3dd6e-5429-4193-8531-6ba1b357358f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.264105 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gbsl" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.267788 4903 scope.go:117] "RemoveContainer" containerID="4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.269392 4903 generic.go:334] "Generic (PLEG): container finished" podID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerID="88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d" exitCode=0 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.269461 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87z2" event={"ID":"6f6c4494-66ec-40c7-960f-0ab4558af7d8","Type":"ContainerDied","Data":"88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.269544 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j87z2" event={"ID":"6f6c4494-66ec-40c7-960f-0ab4558af7d8","Type":"ContainerDied","Data":"1c7fffc91456d9173ae16e69ac18b8c053f6a80217fda8fa0cc63778c9534d9f"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.269568 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j87z2" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.274654 4903 generic.go:334] "Generic (PLEG): container finished" podID="97923485-cab3-4578-ae02-4489827d63ae" containerID="4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05" exitCode=0 Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.274694 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerDied","Data":"4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.274706 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw84l" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.274723 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw84l" event={"ID":"97923485-cab3-4578-ae02-4489827d63ae","Type":"ContainerDied","Data":"165de5b92a810e954ad0db435277356bbb9f0756a38b2c835778fd89b9b4fb80"} Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.293229 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c684124-cb30-4db3-9ece-fd4baa23a639" (UID: "1c684124-cb30-4db3-9ece-fd4baa23a639"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.297784 4903 scope.go:117] "RemoveContainer" containerID="7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.302707 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87z2"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.308309 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j87z2"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.311585 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkrb7\" (UniqueName: \"kubernetes.io/projected/f3e140f0-9bf3-4817-af15-b215b941ba85-kube-api-access-tkrb7\") pod \"f3e140f0-9bf3-4817-af15-b215b941ba85\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.312125 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-utilities\") pod \"f3e140f0-9bf3-4817-af15-b215b941ba85\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.312448 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-catalog-content\") pod \"f3e140f0-9bf3-4817-af15-b215b941ba85\" (UID: \"f3e140f0-9bf3-4817-af15-b215b941ba85\") " Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.314168 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-utilities" (OuterVolumeSpecName: "utilities") pod "f3e140f0-9bf3-4817-af15-b215b941ba85" (UID: "f3e140f0-9bf3-4817-af15-b215b941ba85"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.314804 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f6c4494-66ec-40c7-960f-0ab4558af7d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.314955 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.315093 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.316723 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97923485-cab3-4578-ae02-4489827d63ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.316895 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bd3dd6e-5429-4193-8531-6ba1b357358f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.317041 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqvm5\" (UniqueName: \"kubernetes.io/projected/97923485-cab3-4578-ae02-4489827d63ae-kube-api-access-rqvm5\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.317141 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c684124-cb30-4db3-9ece-fd4baa23a639-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.318319 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fp7dl"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.322733 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fp7dl"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.325810 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw84l"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.327827 4903 scope.go:117] "RemoveContainer" containerID="2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.328389 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170\": container with ID starting with 2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170 not found: ID does not exist" containerID="2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.328437 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170"} err="failed to get container status \"2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170\": rpc error: code = NotFound desc = could not find container 
\"2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170\": container with ID starting with 2b7d1220018b5b282dfc399277d8bffd8dca8867a0d859a34e13898960d2c170 not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.328462 4903 scope.go:117] "RemoveContainer" containerID="4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.328870 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe\": container with ID starting with 4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe not found: ID does not exist" containerID="4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.328891 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe"} err="failed to get container status \"4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe\": rpc error: code = NotFound desc = could not find container \"4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe\": container with ID starting with 4a71ec332710c2aa6cfa3d8b261bfb3479125126ad235402e7490de12058fdfe not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.328903 4903 scope.go:117] "RemoveContainer" containerID="7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.328896 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gw84l"] Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.329156 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc\": container with ID starting with 7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc not found: ID does not exist" containerID="7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.329261 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc"} err="failed to get container status \"7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc\": rpc error: code = NotFound desc = could not find container \"7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc\": container with ID starting with 7d26d13afa417017dfe5b0848417551be77f61b852211e72f1ced49337daeecc not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.329331 4903 scope.go:117] "RemoveContainer" containerID="4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.339124 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e140f0-9bf3-4817-af15-b215b941ba85-kube-api-access-tkrb7" (OuterVolumeSpecName: "kube-api-access-tkrb7") pod "f3e140f0-9bf3-4817-af15-b215b941ba85" (UID: "f3e140f0-9bf3-4817-af15-b215b941ba85"). InnerVolumeSpecName "kube-api-access-tkrb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.341855 4903 scope.go:117] "RemoveContainer" containerID="a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.358790 4903 scope.go:117] "RemoveContainer" containerID="00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.361878 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" (UID: "ce40a6c4-bba4-43dc-8aa7-3a63fd44447f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.376368 4903 scope.go:117] "RemoveContainer" containerID="4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.376803 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92\": container with ID starting with 4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92 not found: ID does not exist" containerID="4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.376843 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92"} err="failed to get container status \"4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92\": rpc error: code = NotFound desc = could not find container \"4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92\": container with ID starting with 4d29e5e03470af7b037cc8e8e0871ecff3bb487894dcbdf703679a06c0556e92 not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.376867 4903 scope.go:117] "RemoveContainer" containerID="a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.378798 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8\": container with ID starting with a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8 not found: ID does not exist" containerID="a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.378922 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8"} err="failed to get container status \"a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8\": rpc error: code = NotFound desc = could not find container \"a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8\": container with ID starting with a03965ead5bd762cec97a96c03c7bd1e4d2bdcc188338671939847ed214f08c8 not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.378975 4903 scope.go:117] "RemoveContainer" containerID="00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693" Jan 28 15:50:50 crc kubenswrapper[4903]: 
E0128 15:50:50.379567 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693\": container with ID starting with 00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693 not found: ID does not exist" containerID="00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.379646 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693"} err="failed to get container status \"00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693\": rpc error: code = NotFound desc = could not find container \"00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693\": container with ID starting with 00fac5876b895bdd39734cc339baad741f4ba20ea7d4188065686d078f745693 not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.379697 4903 scope.go:117] "RemoveContainer" containerID="d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.393453 4903 scope.go:117] "RemoveContainer" containerID="0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.407952 4903 scope.go:117] "RemoveContainer" containerID="795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.418094 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkrb7\" (UniqueName: \"kubernetes.io/projected/f3e140f0-9bf3-4817-af15-b215b941ba85-kube-api-access-tkrb7\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.418127 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.418425 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" path="/var/lib/kubelet/pods/6f6c4494-66ec-40c7-960f-0ab4558af7d8/volumes" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.419560 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97923485-cab3-4578-ae02-4489827d63ae" path="/var/lib/kubelet/pods/97923485-cab3-4578-ae02-4489827d63ae/volumes" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.420234 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" path="/var/lib/kubelet/pods/a35915fe-4b5b-4c69-8abb-2d2d22e423c5/volumes" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.421215 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.421410 4903 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.426892 4903 scope.go:117] "RemoveContainer" containerID="d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.427633 4903 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f\": container with ID starting with d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f not found: ID does not exist" containerID="d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.427761 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f"} err="failed to get container status \"d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f\": rpc error: code = NotFound desc = could not find container \"d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f\": container with ID starting with d43012c03fa01bde66f797c48c2b148e34c1988b997eff7c586305369b11f93f not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.427891 4903 scope.go:117] "RemoveContainer" containerID="0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.433544 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f\": container with ID starting with 0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f not found: ID does not exist" containerID="0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.433602 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f"} err="failed to get container status \"0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f\": rpc error: code = NotFound desc = could not find container \"0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f\": container with ID starting with 0f3e0d8f3b40bca3daa288a39b53b38492368443e81e7b584024f3991289630f not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.433640 4903 scope.go:117] "RemoveContainer" containerID="795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.434196 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c\": container with ID starting with 795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c not found: ID does not exist" containerID="795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.434246 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c"} err="failed to get container status \"795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c\": rpc error: code = NotFound desc = could not find container \"795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c\": container with ID starting with 795ed736b5d2957db0f722b7c002eb5948e4b8cf5ca5a66cb25f0b404e4fd73c not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.434277 4903 scope.go:117] "RemoveContainer" 
containerID="4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.434708 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.435775 4903 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="be404d83-2ac9-4bb0-878d-95da97436fdd" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.437688 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.437732 4903 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="be404d83-2ac9-4bb0-878d-95da97436fdd" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.446610 4903 scope.go:117] "RemoveContainer" containerID="4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.450118 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0\": container with ID starting with 4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0 not found: ID does not exist" containerID="4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.450154 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0"} err="failed to get container status \"4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0\": rpc error: code = NotFound desc = could not find container \"4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0\": container with ID starting with 4818a3eef18a8d046fe6204e336c8005de7e5fee5221edad861111655e4a0ad0 not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.450180 4903 scope.go:117] "RemoveContainer" containerID="17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.458099 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3e140f0-9bf3-4817-af15-b215b941ba85" (UID: "f3e140f0-9bf3-4817-af15-b215b941ba85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.469340 4903 scope.go:117] "RemoveContainer" containerID="a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.495549 4903 scope.go:117] "RemoveContainer" containerID="bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.509081 4903 scope.go:117] "RemoveContainer" containerID="17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.509743 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c\": container with ID starting with 17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c not found: ID does not exist" containerID="17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.509796 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c"} err="failed to get container status \"17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c\": rpc error: code = NotFound desc = could not find container \"17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c\": container with ID starting with 17b5b5b5ccd2652b4a52bd84d8c01f92ea9fb340cc7a63b22f89b574005c762c not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.509818 4903 scope.go:117] "RemoveContainer" containerID="a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.510096 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd\": container with ID starting with a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd not found: ID does not exist" containerID="a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.510119 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd"} err="failed to get container status \"a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd\": rpc error: code = NotFound desc = could not find container \"a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd\": container with ID starting with a815f83b5e800fc04f179bf1e4798c4f69952e83ac6db88a53b9d2c177c785fd not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.510135 4903 scope.go:117] "RemoveContainer" containerID="bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.510453 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a\": container with ID starting with bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a not found: ID does not exist" containerID="bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a" Jan 28 15:50:50 crc 
kubenswrapper[4903]: I0128 15:50:50.510491 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a"} err="failed to get container status \"bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a\": rpc error: code = NotFound desc = could not find container \"bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a\": container with ID starting with bf0684e7d4fd9259a8636b0c09cceb772d46e945f89919b0ac09fabfce0f267a not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.510520 4903 scope.go:117] "RemoveContainer" containerID="88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.519567 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e140f0-9bf3-4817-af15-b215b941ba85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.522243 4903 scope.go:117] "RemoveContainer" containerID="17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.535145 4903 scope.go:117] "RemoveContainer" containerID="b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.546065 4903 scope.go:117] "RemoveContainer" containerID="88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.546367 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d\": container with ID starting with 88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d not found: ID does not exist" containerID="88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.546411 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d"} err="failed to get container status \"88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d\": rpc error: code = NotFound desc = could not find container \"88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d\": container with ID starting with 88cb5acc0e2aeae6be51e6b9e2aa8687040aeb804929054cc7d283c9b332888d not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.546439 4903 scope.go:117] "RemoveContainer" containerID="17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.546709 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d\": container with ID starting with 17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d not found: ID does not exist" containerID="17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.546745 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d"} err="failed to get container status 
\"17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d\": rpc error: code = NotFound desc = could not find container \"17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d\": container with ID starting with 17c3017ab76f383765a7b28ba885d105557ea11f4c64a857bc38cf5b2a39e18d not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.546790 4903 scope.go:117] "RemoveContainer" containerID="b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.547032 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f\": container with ID starting with b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f not found: ID does not exist" containerID="b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.547061 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f"} err="failed to get container status \"b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f\": rpc error: code = NotFound desc = could not find container \"b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f\": container with ID starting with b81520875b3f0ab0262bbeb6ba26c73b1f0cacf6f859535c188227d00105229f not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.547078 4903 scope.go:117] "RemoveContainer" containerID="4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.571980 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gjhpx"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.574506 4903 scope.go:117] "RemoveContainer" containerID="2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.578297 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gjhpx"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.588665 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-954mb"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.595280 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-954mb"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.606370 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gbsl"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.609882 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4gbsl"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.610295 4903 scope.go:117] "RemoveContainer" containerID="23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.613036 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r7vcv"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.615611 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r7vcv"] Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.622090 4903 scope.go:117] 
"RemoveContainer" containerID="4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.622445 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05\": container with ID starting with 4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05 not found: ID does not exist" containerID="4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.622483 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05"} err="failed to get container status \"4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05\": rpc error: code = NotFound desc = could not find container \"4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05\": container with ID starting with 4a688672f9d7373453c28555b67a8454350f29dbf7e394328902ccd52fd09a05 not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.622507 4903 scope.go:117] "RemoveContainer" containerID="2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.622786 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1\": container with ID starting with 2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1 not found: ID does not exist" containerID="2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.622811 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1"} err="failed to get container status \"2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1\": rpc error: code = NotFound desc = could not find container \"2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1\": container with ID starting with 2e95212043c13f409f70642e9e7dba88c2dc268045f06d33cf49d3126e0158b1 not found: ID does not exist" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.622824 4903 scope.go:117] "RemoveContainer" containerID="23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013" Jan 28 15:50:50 crc kubenswrapper[4903]: E0128 15:50:50.623085 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013\": container with ID starting with 23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013 not found: ID does not exist" containerID="23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013" Jan 28 15:50:50 crc kubenswrapper[4903]: I0128 15:50:50.623110 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013"} err="failed to get container status \"23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013\": rpc error: code = NotFound desc = could not find container \"23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013\": container with ID starting with 
23219f7e45634f7d883014632f74cf4ae6ca5c27fd9b6bb3f93578488e51c013 not found: ID does not exist" Jan 28 15:50:51 crc kubenswrapper[4903]: I0128 15:50:51.018746 4903 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fp7dl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:50:51 crc kubenswrapper[4903]: I0128 15:50:51.018822 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fp7dl" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:50:52 crc kubenswrapper[4903]: I0128 15:50:52.421479 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" path="/var/lib/kubelet/pods/1c684124-cb30-4db3-9ece-fd4baa23a639/volumes" Jan 28 15:50:52 crc kubenswrapper[4903]: I0128 15:50:52.422379 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" path="/var/lib/kubelet/pods/8bd3dd6e-5429-4193-8531-6ba1b357358f/volumes" Jan 28 15:50:52 crc kubenswrapper[4903]: I0128 15:50:52.423199 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" path="/var/lib/kubelet/pods/ce40a6c4-bba4-43dc-8aa7-3a63fd44447f/volumes" Jan 28 15:50:52 crc kubenswrapper[4903]: I0128 15:50:52.424230 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" path="/var/lib/kubelet/pods/f3e140f0-9bf3-4817-af15-b215b941ba85/volumes" Jan 28 15:51:03 crc kubenswrapper[4903]: I0128 15:51:03.354631 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 15:51:03 crc kubenswrapper[4903]: I0128 15:51:03.357935 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:51:03 crc kubenswrapper[4903]: I0128 15:51:03.358009 4903 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c09f97a19d28aeebd4e0de0c48c8b7ff6941cdfa4bd2cbbbab6988883aa72e96" exitCode=137 Jan 28 15:51:03 crc kubenswrapper[4903]: I0128 15:51:03.358056 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c09f97a19d28aeebd4e0de0c48c8b7ff6941cdfa4bd2cbbbab6988883aa72e96"} Jan 28 15:51:03 crc kubenswrapper[4903]: I0128 15:51:03.358102 4903 scope.go:117] "RemoveContainer" containerID="9d3339f7e639357cd6305f449fe99aaea5dfad0d30e392c2f811103a5fac9009" Jan 28 15:51:03 crc kubenswrapper[4903]: I0128 15:51:03.952814 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 15:51:04 crc kubenswrapper[4903]: I0128 15:51:04.365015 4903 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 15:51:04 crc kubenswrapper[4903]: I0128 15:51:04.367058 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b3cdada7dde739ed56c15cf103c73e2580482165cc16a8fee8e6029b4da4710"} Jan 28 15:51:05 crc kubenswrapper[4903]: I0128 15:51:05.713158 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:51:06 crc kubenswrapper[4903]: I0128 15:51:06.997350 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:51:09 crc kubenswrapper[4903]: I0128 15:51:09.176423 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:51:09 crc kubenswrapper[4903]: I0128 15:51:09.627471 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 15:51:10 crc kubenswrapper[4903]: I0128 15:51:10.522599 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:51:11 crc kubenswrapper[4903]: I0128 15:51:11.140392 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:51:13 crc kubenswrapper[4903]: I0128 15:51:13.173306 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:51:13 crc kubenswrapper[4903]: I0128 15:51:13.177129 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:51:13 crc kubenswrapper[4903]: I0128 15:51:13.420725 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:51:14 crc kubenswrapper[4903]: I0128 15:51:14.487110 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:51:14 crc kubenswrapper[4903]: I0128 15:51:14.900687 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:51:15 crc kubenswrapper[4903]: I0128 15:51:15.689186 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:51:18 crc kubenswrapper[4903]: I0128 15:51:18.415366 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200044 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s82l8"] Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200614 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" containerName="installer" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200631 4903 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" containerName="installer" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200641 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200648 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200657 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200664 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200674 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200681 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200690 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200697 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200710 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200717 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200729 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200736 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200746 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200753 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200763 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200770 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200779 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200788 4903 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200798 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200805 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200814 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200822 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200831 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200838 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200851 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200859 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200868 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200876 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200887 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200895 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="extract-utilities" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200906 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200913 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200924 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200933 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="extract-content" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200940 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="registry-server" Jan 28 15:51:21 crc 
kubenswrapper[4903]: I0128 15:51:21.200948 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200959 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200966 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: E0128 15:51:21.200976 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.200983 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201089 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="97923485-cab3-4578-ae02-4489827d63ae" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201104 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c684124-cb30-4db3-9ece-fd4baa23a639" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201112 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e140f0-9bf3-4817-af15-b215b941ba85" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201123 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201130 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce40a6c4-bba4-43dc-8aa7-3a63fd44447f" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201138 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd3dd6e-5429-4193-8531-6ba1b357358f" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201151 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35915fe-4b5b-4c69-8abb-2d2d22e423c5" containerName="marketplace-operator" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201161 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3200d9-3341-4a0b-a717-44311b50b23f" containerName="installer" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201170 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6c4494-66ec-40c7-960f-0ab4558af7d8" containerName="registry-server" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.201650 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.204588 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.205909 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.206062 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.206928 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.214841 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s82l8"] Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.215588 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.288037 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.288160 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.288189 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8mm\" (UniqueName: \"kubernetes.io/projected/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-kube-api-access-2g8mm\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.305360 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-znp46"] Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.305623 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" podUID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" containerName="controller-manager" containerID="cri-o://bc7db0b8f022386e5cc40123db15adec0fe5917426b1c2f2eed81a7b52368651" gracePeriod=30 Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.311205 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8tgg5"] Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.312184 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.320844 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt"] Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.321105 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" podUID="9f43563c-173f-4276-ac59-02fc755b6585" containerName="route-controller-manager" containerID="cri-o://b57f0c3f4e9ac56940bf7adede44429d9e93d6a71c4bfa01e1935c4a1834445e" gracePeriod=30 Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.330504 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dqbbb"] Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.342697 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8tgg5"] Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391583 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qfhq\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-kube-api-access-8qfhq\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391663 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391702 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8mm\" (UniqueName: \"kubernetes.io/projected/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-kube-api-access-2g8mm\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391719 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-registry-tls\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391741 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391767 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4988f120-45e3-4d8e-a999-9dd41a7f44df-registry-certificates\") pod 
\"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391797 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4988f120-45e3-4d8e-a999-9dd41a7f44df-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391830 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4988f120-45e3-4d8e-a999-9dd41a7f44df-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391856 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391879 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4988f120-45e3-4d8e-a999-9dd41a7f44df-trusted-ca\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.391894 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-bound-sa-token\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.394549 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.405650 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.431293 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8mm\" (UniqueName: \"kubernetes.io/projected/1f189d2d-081e-4df7-bd5e-d9fcb326fbdd-kube-api-access-2g8mm\") pod \"marketplace-operator-79b997595-s82l8\" (UID: \"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.461457 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.493803 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qfhq\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-kube-api-access-8qfhq\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.494447 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-registry-tls\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.494641 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4988f120-45e3-4d8e-a999-9dd41a7f44df-registry-certificates\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.494790 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4988f120-45e3-4d8e-a999-9dd41a7f44df-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.494930 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4988f120-45e3-4d8e-a999-9dd41a7f44df-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.495060 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4988f120-45e3-4d8e-a999-9dd41a7f44df-trusted-ca\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.495175 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-bound-sa-token\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.495882 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4988f120-45e3-4d8e-a999-9dd41a7f44df-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.496043 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4988f120-45e3-4d8e-a999-9dd41a7f44df-registry-certificates\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.500805 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4988f120-45e3-4d8e-a999-9dd41a7f44df-trusted-ca\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.503518 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4988f120-45e3-4d8e-a999-9dd41a7f44df-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.504064 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-registry-tls\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.508028 4903 generic.go:334] "Generic (PLEG): container finished" podID="9f43563c-173f-4276-ac59-02fc755b6585" containerID="b57f0c3f4e9ac56940bf7adede44429d9e93d6a71c4bfa01e1935c4a1834445e" exitCode=0 Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.508129 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" event={"ID":"9f43563c-173f-4276-ac59-02fc755b6585","Type":"ContainerDied","Data":"b57f0c3f4e9ac56940bf7adede44429d9e93d6a71c4bfa01e1935c4a1834445e"} Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.527820 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.532148 4903 generic.go:334] "Generic (PLEG): container finished" podID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" containerID="bc7db0b8f022386e5cc40123db15adec0fe5917426b1c2f2eed81a7b52368651" exitCode=0 Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.532204 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" event={"ID":"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff","Type":"ContainerDied","Data":"bc7db0b8f022386e5cc40123db15adec0fe5917426b1c2f2eed81a7b52368651"} Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.536518 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qfhq\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-kube-api-access-8qfhq\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.565728 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4988f120-45e3-4d8e-a999-9dd41a7f44df-bound-sa-token\") pod \"image-registry-66df7c8f76-8tgg5\" (UID: \"4988f120-45e3-4d8e-a999-9dd41a7f44df\") " pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.631363 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.869901 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.907847 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:51:21 crc kubenswrapper[4903]: I0128 15:51:21.932943 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.005302 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-serving-cert\") pod \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.005362 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-client-ca\") pod \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.005382 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-proxy-ca-bundles\") pod \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.005440 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xx5q\" (UniqueName: \"kubernetes.io/projected/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-kube-api-access-5xx5q\") pod \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.005516 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-config\") pod \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\" (UID: \"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.006268 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" (UID: "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.006277 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-client-ca" (OuterVolumeSpecName: "client-ca") pod "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" (UID: "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.006693 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-config" (OuterVolumeSpecName: "config") pod "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" (UID: "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.009243 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" (UID: "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.009755 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-kube-api-access-5xx5q" (OuterVolumeSpecName: "kube-api-access-5xx5q") pod "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" (UID: "6d82ab75-41cc-46c6-8ffb-7e81bc29cfff"). InnerVolumeSpecName "kube-api-access-5xx5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.059866 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s82l8"] Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107064 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f43563c-173f-4276-ac59-02fc755b6585-serving-cert\") pod \"9f43563c-173f-4276-ac59-02fc755b6585\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107103 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-config\") pod \"9f43563c-173f-4276-ac59-02fc755b6585\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107168 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-client-ca\") pod \"9f43563c-173f-4276-ac59-02fc755b6585\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107198 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcfqg\" (UniqueName: \"kubernetes.io/projected/9f43563c-173f-4276-ac59-02fc755b6585-kube-api-access-lcfqg\") pod \"9f43563c-173f-4276-ac59-02fc755b6585\" (UID: \"9f43563c-173f-4276-ac59-02fc755b6585\") " Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107346 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xx5q\" (UniqueName: \"kubernetes.io/projected/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-kube-api-access-5xx5q\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107357 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107366 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107374 4903 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.107382 4903 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.108352 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-config" (OuterVolumeSpecName: "config") pod "9f43563c-173f-4276-ac59-02fc755b6585" (UID: "9f43563c-173f-4276-ac59-02fc755b6585"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.108394 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f43563c-173f-4276-ac59-02fc755b6585" (UID: "9f43563c-173f-4276-ac59-02fc755b6585"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.111031 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43563c-173f-4276-ac59-02fc755b6585-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f43563c-173f-4276-ac59-02fc755b6585" (UID: "9f43563c-173f-4276-ac59-02fc755b6585"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.111356 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f43563c-173f-4276-ac59-02fc755b6585-kube-api-access-lcfqg" (OuterVolumeSpecName: "kube-api-access-lcfqg") pod "9f43563c-173f-4276-ac59-02fc755b6585" (UID: "9f43563c-173f-4276-ac59-02fc755b6585"). InnerVolumeSpecName "kube-api-access-lcfqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.129732 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8tgg5"] Jan 28 15:51:22 crc kubenswrapper[4903]: W0128 15:51:22.134132 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4988f120_45e3_4d8e_a999_9dd41a7f44df.slice/crio-8370c8ce089c0d45e265b3eb60873b517f78f96bd1f8254fafeb2f629108e69e WatchSource:0}: Error finding container 8370c8ce089c0d45e265b3eb60873b517f78f96bd1f8254fafeb2f629108e69e: Status 404 returned error can't find the container with id 8370c8ce089c0d45e265b3eb60873b517f78f96bd1f8254fafeb2f629108e69e Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.208003 4903 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.209050 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcfqg\" (UniqueName: \"kubernetes.io/projected/9f43563c-173f-4276-ac59-02fc755b6585-kube-api-access-lcfqg\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.209145 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f43563c-173f-4276-ac59-02fc755b6585-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.209204 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f43563c-173f-4276-ac59-02fc755b6585-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.538722 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" event={"ID":"4988f120-45e3-4d8e-a999-9dd41a7f44df","Type":"ContainerStarted","Data":"1ce891f59ec6a209a36c7769f70139dda303ede6e8d507228b002f9f224b56cc"} Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.539327 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" event={"ID":"4988f120-45e3-4d8e-a999-9dd41a7f44df","Type":"ContainerStarted","Data":"8370c8ce089c0d45e265b3eb60873b517f78f96bd1f8254fafeb2f629108e69e"} Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.539428 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.540224 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" event={"ID":"6d82ab75-41cc-46c6-8ffb-7e81bc29cfff","Type":"ContainerDied","Data":"fdc6fc75a1f514d7d4524fe336089c0a93d818ccfeb33dda26db278af90c399e"} Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.540276 4903 scope.go:117] "RemoveContainer" containerID="bc7db0b8f022386e5cc40123db15adec0fe5917426b1c2f2eed81a7b52368651" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.540373 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-znp46" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.543257 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.543700 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt" event={"ID":"9f43563c-173f-4276-ac59-02fc755b6585","Type":"ContainerDied","Data":"49c65416cad9c207aa297c0bd2540d4fc76cb2ab04eded387489ea5b54d6117b"} Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.545378 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" event={"ID":"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd","Type":"ContainerStarted","Data":"b0643e40f8dbf221ed8956ce1b2118e669b37bf0063b11456b60c073c7fb99d8"} Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.545417 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" event={"ID":"1f189d2d-081e-4df7-bd5e-d9fcb326fbdd","Type":"ContainerStarted","Data":"fd2017b9937d84eb96b4b877e1dc840ed29e0ad65bd271f6630293721cd5bd5d"} Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.545778 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.550461 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.555321 4903 scope.go:117] "RemoveContainer" containerID="b57f0c3f4e9ac56940bf7adede44429d9e93d6a71c4bfa01e1935c4a1834445e" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.560263 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" podStartSLOduration=1.560248828 podStartE2EDuration="1.560248828s" podCreationTimestamp="2026-01-28 15:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:51:22.557699356 +0000 UTC m=+354.833670877" watchObservedRunningTime="2026-01-28 15:51:22.560248828 +0000 UTC m=+354.836220339" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.596095 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s82l8" podStartSLOduration=1.5960753140000001 podStartE2EDuration="1.596075314s" podCreationTimestamp="2026-01-28 15:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:51:22.579498489 +0000 UTC m=+354.855470000" watchObservedRunningTime="2026-01-28 15:51:22.596075314 +0000 UTC m=+354.872046825" Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.598230 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-znp46"] Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.602216 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-znp46"] Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.616629 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt"] Jan 28 15:51:22 crc kubenswrapper[4903]: I0128 15:51:22.620231 4903 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t4vvt"] Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.191393 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf"] Jan 28 15:51:23 crc kubenswrapper[4903]: E0128 15:51:23.191945 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" containerName="controller-manager" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.192042 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" containerName="controller-manager" Jan 28 15:51:23 crc kubenswrapper[4903]: E0128 15:51:23.192153 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f43563c-173f-4276-ac59-02fc755b6585" containerName="route-controller-manager" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.192226 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f43563c-173f-4276-ac59-02fc755b6585" containerName="route-controller-manager" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.192412 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f43563c-173f-4276-ac59-02fc755b6585" containerName="route-controller-manager" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.192514 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" containerName="controller-manager" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.193063 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.195240 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84f6b5785b-vng8v"] Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.195839 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.196017 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.196024 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.196183 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.198419 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.198617 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.198978 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.199088 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.199193 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.199269 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.199462 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.199594 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.206198 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.208512 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf"] Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.210876 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.220591 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84f6b5785b-vng8v"] Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322378 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnlwt\" (UniqueName: \"kubernetes.io/projected/a2cda318-0ae9-4565-bca6-f1407913545a-kube-api-access-cnlwt\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322436 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2cda318-0ae9-4565-bca6-f1407913545a-serving-cert\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322486 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-config\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322517 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-config\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322565 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-client-ca\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322606 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jvv\" (UniqueName: \"kubernetes.io/projected/22cbef70-5a90-4a27-b82c-f433cf004687-kube-api-access-t9jvv\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322626 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-proxy-ca-bundles\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322649 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-client-ca\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.322675 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cbef70-5a90-4a27-b82c-f433cf004687-serving-cert\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.423946 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cbef70-5a90-4a27-b82c-f433cf004687-serving-cert\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424022 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cnlwt\" (UniqueName: \"kubernetes.io/projected/a2cda318-0ae9-4565-bca6-f1407913545a-kube-api-access-cnlwt\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424052 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2cda318-0ae9-4565-bca6-f1407913545a-serving-cert\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424113 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-config\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424146 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-config\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424170 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-client-ca\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424207 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jvv\" (UniqueName: \"kubernetes.io/projected/22cbef70-5a90-4a27-b82c-f433cf004687-kube-api-access-t9jvv\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424229 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-proxy-ca-bundles\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.424251 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-client-ca\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.425261 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-client-ca\") pod 
\"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.425951 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-client-ca\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.426696 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-proxy-ca-bundles\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.427185 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-config\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.427377 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-config\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.430656 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cbef70-5a90-4a27-b82c-f433cf004687-serving-cert\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.440105 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2cda318-0ae9-4565-bca6-f1407913545a-serving-cert\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.447430 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnlwt\" (UniqueName: \"kubernetes.io/projected/a2cda318-0ae9-4565-bca6-f1407913545a-kube-api-access-cnlwt\") pod \"route-controller-manager-58bfc7d8bb-vxgmf\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.451487 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jvv\" (UniqueName: \"kubernetes.io/projected/22cbef70-5a90-4a27-b82c-f433cf004687-kube-api-access-t9jvv\") pod \"controller-manager-84f6b5785b-vng8v\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc 
kubenswrapper[4903]: I0128 15:51:23.510258 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.519102 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.726334 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf"] Jan 28 15:51:23 crc kubenswrapper[4903]: W0128 15:51:23.741128 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2cda318_0ae9_4565_bca6_f1407913545a.slice/crio-845e9f97bdff91986781f82f1c0ef43e5ca6147f8f6e900e91e5bcead39c9198 WatchSource:0}: Error finding container 845e9f97bdff91986781f82f1c0ef43e5ca6147f8f6e900e91e5bcead39c9198: Status 404 returned error can't find the container with id 845e9f97bdff91986781f82f1c0ef43e5ca6147f8f6e900e91e5bcead39c9198 Jan 28 15:51:23 crc kubenswrapper[4903]: I0128 15:51:23.785502 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84f6b5785b-vng8v"] Jan 28 15:51:23 crc kubenswrapper[4903]: W0128 15:51:23.792640 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22cbef70_5a90_4a27_b82c_f433cf004687.slice/crio-b9ed37f731519a122d18c5f69f7969dd2d64efc15b34f152ea1c1cee4346fdd2 WatchSource:0}: Error finding container b9ed37f731519a122d18c5f69f7969dd2d64efc15b34f152ea1c1cee4346fdd2: Status 404 returned error can't find the container with id b9ed37f731519a122d18c5f69f7969dd2d64efc15b34f152ea1c1cee4346fdd2 Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.025299 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.427355 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d82ab75-41cc-46c6-8ffb-7e81bc29cfff" path="/var/lib/kubelet/pods/6d82ab75-41cc-46c6-8ffb-7e81bc29cfff/volumes" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.428064 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f43563c-173f-4276-ac59-02fc755b6585" path="/var/lib/kubelet/pods/9f43563c-173f-4276-ac59-02fc755b6585/volumes" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.560816 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" event={"ID":"a2cda318-0ae9-4565-bca6-f1407913545a","Type":"ContainerStarted","Data":"48add5c8956f4fcfb2b77529d4d22a21d95531721910c22b83177aff6b430551"} Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.560878 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" event={"ID":"a2cda318-0ae9-4565-bca6-f1407913545a","Type":"ContainerStarted","Data":"845e9f97bdff91986781f82f1c0ef43e5ca6147f8f6e900e91e5bcead39c9198"} Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.561042 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 
15:51:24.562672 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" event={"ID":"22cbef70-5a90-4a27-b82c-f433cf004687","Type":"ContainerStarted","Data":"0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6"} Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.562707 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" event={"ID":"22cbef70-5a90-4a27-b82c-f433cf004687","Type":"ContainerStarted","Data":"b9ed37f731519a122d18c5f69f7969dd2d64efc15b34f152ea1c1cee4346fdd2"} Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.563489 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.566431 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.567609 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.582258 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" podStartSLOduration=3.582243753 podStartE2EDuration="3.582243753s" podCreationTimestamp="2026-01-28 15:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:51:24.580835753 +0000 UTC m=+356.856807264" watchObservedRunningTime="2026-01-28 15:51:24.582243753 +0000 UTC m=+356.858215264" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.619718 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" podStartSLOduration=3.619699954 podStartE2EDuration="3.619699954s" podCreationTimestamp="2026-01-28 15:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:51:24.618278494 +0000 UTC m=+356.894250005" watchObservedRunningTime="2026-01-28 15:51:24.619699954 +0000 UTC m=+356.895671465" Jan 28 15:51:24 crc kubenswrapper[4903]: I0128 15:51:24.728041 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84f6b5785b-vng8v"] Jan 28 15:51:26 crc kubenswrapper[4903]: I0128 15:51:26.571500 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" podUID="22cbef70-5a90-4a27-b82c-f433cf004687" containerName="controller-manager" containerID="cri-o://0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6" gracePeriod=30 Jan 28 15:51:26 crc kubenswrapper[4903]: I0128 15:51:26.613670 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:51:26 crc kubenswrapper[4903]: I0128 15:51:26.613729 4903 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.004626 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.047832 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-857477fb55-kvsm4"] Jan 28 15:51:27 crc kubenswrapper[4903]: E0128 15:51:27.048097 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cbef70-5a90-4a27-b82c-f433cf004687" containerName="controller-manager" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.048116 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cbef70-5a90-4a27-b82c-f433cf004687" containerName="controller-manager" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.048243 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="22cbef70-5a90-4a27-b82c-f433cf004687" containerName="controller-manager" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.048711 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.062805 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-857477fb55-kvsm4"] Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183339 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cbef70-5a90-4a27-b82c-f433cf004687-serving-cert\") pod \"22cbef70-5a90-4a27-b82c-f433cf004687\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183402 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-client-ca\") pod \"22cbef70-5a90-4a27-b82c-f433cf004687\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183429 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9jvv\" (UniqueName: \"kubernetes.io/projected/22cbef70-5a90-4a27-b82c-f433cf004687-kube-api-access-t9jvv\") pod \"22cbef70-5a90-4a27-b82c-f433cf004687\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183504 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-proxy-ca-bundles\") pod \"22cbef70-5a90-4a27-b82c-f433cf004687\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183608 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-config\") pod \"22cbef70-5a90-4a27-b82c-f433cf004687\" (UID: \"22cbef70-5a90-4a27-b82c-f433cf004687\") " Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183800 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-serving-cert\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183864 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz757\" (UniqueName: \"kubernetes.io/projected/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-kube-api-access-vz757\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183896 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-config\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183923 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-client-ca\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.183944 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-proxy-ca-bundles\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.185129 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "22cbef70-5a90-4a27-b82c-f433cf004687" (UID: "22cbef70-5a90-4a27-b82c-f433cf004687"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.185406 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-config" (OuterVolumeSpecName: "config") pod "22cbef70-5a90-4a27-b82c-f433cf004687" (UID: "22cbef70-5a90-4a27-b82c-f433cf004687"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.185823 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-client-ca" (OuterVolumeSpecName: "client-ca") pod "22cbef70-5a90-4a27-b82c-f433cf004687" (UID: "22cbef70-5a90-4a27-b82c-f433cf004687"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.189395 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22cbef70-5a90-4a27-b82c-f433cf004687-kube-api-access-t9jvv" (OuterVolumeSpecName: "kube-api-access-t9jvv") pod "22cbef70-5a90-4a27-b82c-f433cf004687" (UID: "22cbef70-5a90-4a27-b82c-f433cf004687"). InnerVolumeSpecName "kube-api-access-t9jvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.196666 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22cbef70-5a90-4a27-b82c-f433cf004687-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "22cbef70-5a90-4a27-b82c-f433cf004687" (UID: "22cbef70-5a90-4a27-b82c-f433cf004687"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.284911 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-proxy-ca-bundles\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.284962 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-serving-cert\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.285043 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz757\" (UniqueName: \"kubernetes.io/projected/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-kube-api-access-vz757\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.285883 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-config\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.285915 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-client-ca\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.285966 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.285976 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22cbef70-5a90-4a27-b82c-f433cf004687-serving-cert\") on node \"crc\" DevicePath 
\"\"" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.285985 4903 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.285997 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9jvv\" (UniqueName: \"kubernetes.io/projected/22cbef70-5a90-4a27-b82c-f433cf004687-kube-api-access-t9jvv\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.286024 4903 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22cbef70-5a90-4a27-b82c-f433cf004687-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.286932 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-client-ca\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.287059 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-config\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.287711 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-proxy-ca-bundles\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.289985 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-serving-cert\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.303981 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz757\" (UniqueName: \"kubernetes.io/projected/a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff-kube-api-access-vz757\") pod \"controller-manager-857477fb55-kvsm4\" (UID: \"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff\") " pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.366979 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.578190 4903 generic.go:334] "Generic (PLEG): container finished" podID="22cbef70-5a90-4a27-b82c-f433cf004687" containerID="0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6" exitCode=0 Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.578245 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" event={"ID":"22cbef70-5a90-4a27-b82c-f433cf004687","Type":"ContainerDied","Data":"0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6"} Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.578250 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.578282 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f6b5785b-vng8v" event={"ID":"22cbef70-5a90-4a27-b82c-f433cf004687","Type":"ContainerDied","Data":"b9ed37f731519a122d18c5f69f7969dd2d64efc15b34f152ea1c1cee4346fdd2"} Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.578304 4903 scope.go:117] "RemoveContainer" containerID="0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.604481 4903 scope.go:117] "RemoveContainer" containerID="0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.611417 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84f6b5785b-vng8v"] Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.613824 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84f6b5785b-vng8v"] Jan 28 15:51:27 crc kubenswrapper[4903]: E0128 15:51:27.613806 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6\": container with ID starting with 0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6 not found: ID does not exist" containerID="0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.613907 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6"} err="failed to get container status \"0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6\": rpc error: code = NotFound desc = could not find container \"0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6\": container with ID starting with 0d72623fdb54c5b2d286874e5360ceb3d2a9cf8c669749fea4300151197291c6 not found: ID does not exist" Jan 28 15:51:27 crc kubenswrapper[4903]: I0128 15:51:27.845559 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-857477fb55-kvsm4"] Jan 28 15:51:28 crc kubenswrapper[4903]: I0128 15:51:28.421875 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22cbef70-5a90-4a27-b82c-f433cf004687" path="/var/lib/kubelet/pods/22cbef70-5a90-4a27-b82c-f433cf004687/volumes" Jan 28 15:51:28 crc kubenswrapper[4903]: I0128 15:51:28.591454 4903 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" event={"ID":"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff","Type":"ContainerStarted","Data":"b72910cd5c9daa25162f8c35a6fce14a918ddaaf19b6fc31079b83ea0122ac24"} Jan 28 15:51:28 crc kubenswrapper[4903]: I0128 15:51:28.591913 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" event={"ID":"a6477e3b-8ca3-4c85-93bd-a5645bf2f9ff","Type":"ContainerStarted","Data":"554a145962cbcc85a11e877f69747d0edc4ebb52b1a9e8d8136b8be6bf306d1c"} Jan 28 15:51:28 crc kubenswrapper[4903]: I0128 15:51:28.591938 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:28 crc kubenswrapper[4903]: I0128 15:51:28.597570 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" Jan 28 15:51:28 crc kubenswrapper[4903]: I0128 15:51:28.624593 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-857477fb55-kvsm4" podStartSLOduration=4.623369359 podStartE2EDuration="4.623369359s" podCreationTimestamp="2026-01-28 15:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:51:28.616668527 +0000 UTC m=+360.892640038" watchObservedRunningTime="2026-01-28 15:51:28.623369359 +0000 UTC m=+360.899340870" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.089546 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qh5wp"] Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.092095 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.095620 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.100104 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qh5wp"] Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.189276 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmpm\" (UniqueName: \"kubernetes.io/projected/992abaea-da7a-4789-8903-b5e95b0fb4ba-kube-api-access-snmpm\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.189348 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992abaea-da7a-4789-8903-b5e95b0fb4ba-utilities\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.189433 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992abaea-da7a-4789-8903-b5e95b0fb4ba-catalog-content\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.290732 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snmpm\" (UniqueName: \"kubernetes.io/projected/992abaea-da7a-4789-8903-b5e95b0fb4ba-kube-api-access-snmpm\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.291074 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992abaea-da7a-4789-8903-b5e95b0fb4ba-utilities\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.291274 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992abaea-da7a-4789-8903-b5e95b0fb4ba-catalog-content\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.292040 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/992abaea-da7a-4789-8903-b5e95b0fb4ba-utilities\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.292189 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/992abaea-da7a-4789-8903-b5e95b0fb4ba-catalog-content\") pod \"certified-operators-qh5wp\" (UID: 
\"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.292609 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2slh"] Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.293798 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.297575 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.308064 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2slh"] Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.317024 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmpm\" (UniqueName: \"kubernetes.io/projected/992abaea-da7a-4789-8903-b5e95b0fb4ba-kube-api-access-snmpm\") pod \"certified-operators-qh5wp\" (UID: \"992abaea-da7a-4789-8903-b5e95b0fb4ba\") " pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.392429 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf22k\" (UniqueName: \"kubernetes.io/projected/205dcee3-f878-45d6-8b6d-9050cc045101-kube-api-access-zf22k\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.392481 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-utilities\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.392634 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-catalog-content\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.408521 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.494277 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf22k\" (UniqueName: \"kubernetes.io/projected/205dcee3-f878-45d6-8b6d-9050cc045101-kube-api-access-zf22k\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.494766 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-utilities\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.494937 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-catalog-content\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.495581 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-utilities\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.495815 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-catalog-content\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.520363 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf22k\" (UniqueName: \"kubernetes.io/projected/205dcee3-f878-45d6-8b6d-9050cc045101-kube-api-access-zf22k\") pod \"community-operators-w2slh\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.621068 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:40 crc kubenswrapper[4903]: I0128 15:51:40.857425 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qh5wp"] Jan 28 15:51:40 crc kubenswrapper[4903]: W0128 15:51:40.863283 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992abaea_da7a_4789_8903_b5e95b0fb4ba.slice/crio-499c6c79d1d5024a593da9d8353be40602e5a731b5ff304cae4a95886d9c8da8 WatchSource:0}: Error finding container 499c6c79d1d5024a593da9d8353be40602e5a731b5ff304cae4a95886d9c8da8: Status 404 returned error can't find the container with id 499c6c79d1d5024a593da9d8353be40602e5a731b5ff304cae4a95886d9c8da8 Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.015732 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2slh"] Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.643159 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8tgg5" Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.671901 4903 generic.go:334] "Generic (PLEG): container finished" podID="992abaea-da7a-4789-8903-b5e95b0fb4ba" containerID="0a00ffdf40fc7166666d9e27739b416c71fc1869a5103cdccbaad3b8d9bafd3b" exitCode=0 Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.672039 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh5wp" event={"ID":"992abaea-da7a-4789-8903-b5e95b0fb4ba","Type":"ContainerDied","Data":"0a00ffdf40fc7166666d9e27739b416c71fc1869a5103cdccbaad3b8d9bafd3b"} Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.672091 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh5wp" event={"ID":"992abaea-da7a-4789-8903-b5e95b0fb4ba","Type":"ContainerStarted","Data":"499c6c79d1d5024a593da9d8353be40602e5a731b5ff304cae4a95886d9c8da8"} Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.675988 4903 generic.go:334] "Generic (PLEG): container finished" podID="205dcee3-f878-45d6-8b6d-9050cc045101" containerID="e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc" exitCode=0 Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.676074 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2slh" event={"ID":"205dcee3-f878-45d6-8b6d-9050cc045101","Type":"ContainerDied","Data":"e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc"} Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.676126 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2slh" event={"ID":"205dcee3-f878-45d6-8b6d-9050cc045101","Type":"ContainerStarted","Data":"1398427f8ac79367c616919ae6f786824277252d72d15c6a214cc96399d270af"} Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.729698 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8t5gp"] Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.891070 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jbb"] Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.893080 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.895088 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.927275 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jbb"] Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.946884 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpnw\" (UniqueName: \"kubernetes.io/projected/eaaba70c-318d-4992-bfca-fd9ac7216a50-kube-api-access-shpnw\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.946925 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaba70c-318d-4992-bfca-fd9ac7216a50-utilities\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:41 crc kubenswrapper[4903]: I0128 15:51:41.947177 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaba70c-318d-4992-bfca-fd9ac7216a50-catalog-content\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.047851 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaba70c-318d-4992-bfca-fd9ac7216a50-catalog-content\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.047930 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpnw\" (UniqueName: \"kubernetes.io/projected/eaaba70c-318d-4992-bfca-fd9ac7216a50-kube-api-access-shpnw\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.047950 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaba70c-318d-4992-bfca-fd9ac7216a50-utilities\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.048442 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaaba70c-318d-4992-bfca-fd9ac7216a50-catalog-content\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.048478 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaaba70c-318d-4992-bfca-fd9ac7216a50-utilities\") pod \"redhat-marketplace-b7jbb\" (UID: 
\"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.069442 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpnw\" (UniqueName: \"kubernetes.io/projected/eaaba70c-318d-4992-bfca-fd9ac7216a50-kube-api-access-shpnw\") pod \"redhat-marketplace-b7jbb\" (UID: \"eaaba70c-318d-4992-bfca-fd9ac7216a50\") " pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.253219 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.656568 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7jbb"] Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.682138 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jbb" event={"ID":"eaaba70c-318d-4992-bfca-fd9ac7216a50","Type":"ContainerStarted","Data":"be845a50f212c6a3680ba81a95e3d1fe16dd8d257ab178aedb308f2ec1554fb3"} Jan 28 15:51:42 crc kubenswrapper[4903]: I0128 15:51:42.684035 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh5wp" event={"ID":"992abaea-da7a-4789-8903-b5e95b0fb4ba","Type":"ContainerStarted","Data":"ec64f0abf46e9fe3d32c8a191be06bc1597100497088c17826c5c881566ad50c"} Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.287752 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9s6n"] Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.289188 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.291950 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.306825 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9s6n"] Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.366015 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c546c08d-16e9-4d6f-b474-8602788b2dfc-utilities\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.366085 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c546c08d-16e9-4d6f-b474-8602788b2dfc-catalog-content\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.366292 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9lw\" (UniqueName: \"kubernetes.io/projected/c546c08d-16e9-4d6f-b474-8602788b2dfc-kube-api-access-7v9lw\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.467465 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v9lw\" (UniqueName: \"kubernetes.io/projected/c546c08d-16e9-4d6f-b474-8602788b2dfc-kube-api-access-7v9lw\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.467653 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c546c08d-16e9-4d6f-b474-8602788b2dfc-utilities\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.467733 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c546c08d-16e9-4d6f-b474-8602788b2dfc-catalog-content\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.468371 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c546c08d-16e9-4d6f-b474-8602788b2dfc-utilities\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.469141 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c546c08d-16e9-4d6f-b474-8602788b2dfc-catalog-content\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " 
pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.491789 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v9lw\" (UniqueName: \"kubernetes.io/projected/c546c08d-16e9-4d6f-b474-8602788b2dfc-kube-api-access-7v9lw\") pod \"redhat-operators-f9s6n\" (UID: \"c546c08d-16e9-4d6f-b474-8602788b2dfc\") " pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.601730 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.692436 4903 generic.go:334] "Generic (PLEG): container finished" podID="eaaba70c-318d-4992-bfca-fd9ac7216a50" containerID="d4f1474d2ad86482d6262a46e1990d7f7d84bb7df0e1c16cf0c4221a00cf65a3" exitCode=0 Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.692574 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jbb" event={"ID":"eaaba70c-318d-4992-bfca-fd9ac7216a50","Type":"ContainerDied","Data":"d4f1474d2ad86482d6262a46e1990d7f7d84bb7df0e1c16cf0c4221a00cf65a3"} Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.697129 4903 generic.go:334] "Generic (PLEG): container finished" podID="205dcee3-f878-45d6-8b6d-9050cc045101" containerID="c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618" exitCode=0 Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.697209 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2slh" event={"ID":"205dcee3-f878-45d6-8b6d-9050cc045101","Type":"ContainerDied","Data":"c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618"} Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.701262 4903 generic.go:334] "Generic (PLEG): container finished" podID="992abaea-da7a-4789-8903-b5e95b0fb4ba" containerID="ec64f0abf46e9fe3d32c8a191be06bc1597100497088c17826c5c881566ad50c" exitCode=0 Jan 28 15:51:43 crc kubenswrapper[4903]: I0128 15:51:43.701305 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh5wp" event={"ID":"992abaea-da7a-4789-8903-b5e95b0fb4ba","Type":"ContainerDied","Data":"ec64f0abf46e9fe3d32c8a191be06bc1597100497088c17826c5c881566ad50c"} Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.003935 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9s6n"] Jan 28 15:51:44 crc kubenswrapper[4903]: W0128 15:51:44.011829 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc546c08d_16e9_4d6f_b474_8602788b2dfc.slice/crio-3ebe5e1777c6645baf4cea649722abac5fa72bbb229bffbc6dca0c1006f8f95c WatchSource:0}: Error finding container 3ebe5e1777c6645baf4cea649722abac5fa72bbb229bffbc6dca0c1006f8f95c: Status 404 returned error can't find the container with id 3ebe5e1777c6645baf4cea649722abac5fa72bbb229bffbc6dca0c1006f8f95c Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.270196 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf"] Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.270792 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" podUID="a2cda318-0ae9-4565-bca6-f1407913545a" 
containerName="route-controller-manager" containerID="cri-o://48add5c8956f4fcfb2b77529d4d22a21d95531721910c22b83177aff6b430551" gracePeriod=30 Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.707886 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh5wp" event={"ID":"992abaea-da7a-4789-8903-b5e95b0fb4ba","Type":"ContainerStarted","Data":"d21cc3743a88258ecbee80136b9a1766fe53cddfd478499b1227ed3dcc49ae3d"} Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.708937 4903 generic.go:334] "Generic (PLEG): container finished" podID="c546c08d-16e9-4d6f-b474-8602788b2dfc" containerID="2f5fb5d310e475ec9b2a3b1b48c894d91b8aa288c416ab8e41b2040daef7bbee" exitCode=0 Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.708998 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9s6n" event={"ID":"c546c08d-16e9-4d6f-b474-8602788b2dfc","Type":"ContainerDied","Data":"2f5fb5d310e475ec9b2a3b1b48c894d91b8aa288c416ab8e41b2040daef7bbee"} Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.709023 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9s6n" event={"ID":"c546c08d-16e9-4d6f-b474-8602788b2dfc","Type":"ContainerStarted","Data":"3ebe5e1777c6645baf4cea649722abac5fa72bbb229bffbc6dca0c1006f8f95c"} Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.711132 4903 generic.go:334] "Generic (PLEG): container finished" podID="a2cda318-0ae9-4565-bca6-f1407913545a" containerID="48add5c8956f4fcfb2b77529d4d22a21d95531721910c22b83177aff6b430551" exitCode=0 Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.711197 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" event={"ID":"a2cda318-0ae9-4565-bca6-f1407913545a","Type":"ContainerDied","Data":"48add5c8956f4fcfb2b77529d4d22a21d95531721910c22b83177aff6b430551"} Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.714348 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2slh" event={"ID":"205dcee3-f878-45d6-8b6d-9050cc045101","Type":"ContainerStarted","Data":"e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6"} Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.753936 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qh5wp" podStartSLOduration=2.17429738 podStartE2EDuration="4.753914635s" podCreationTimestamp="2026-01-28 15:51:40 +0000 UTC" firstStartedPulling="2026-01-28 15:51:41.674495165 +0000 UTC m=+373.950466716" lastFinishedPulling="2026-01-28 15:51:44.25411245 +0000 UTC m=+376.530083971" observedRunningTime="2026-01-28 15:51:44.729916029 +0000 UTC m=+377.005887550" watchObservedRunningTime="2026-01-28 15:51:44.753914635 +0000 UTC m=+377.029886166" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.754492 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2slh" podStartSLOduration=2.287717227 podStartE2EDuration="4.754486162s" podCreationTimestamp="2026-01-28 15:51:40 +0000 UTC" firstStartedPulling="2026-01-28 15:51:41.678795058 +0000 UTC m=+373.954766619" lastFinishedPulling="2026-01-28 15:51:44.145564043 +0000 UTC m=+376.421535554" observedRunningTime="2026-01-28 15:51:44.75161204 +0000 UTC m=+377.027583571" watchObservedRunningTime="2026-01-28 15:51:44.754486162 +0000 UTC m=+377.030457673" Jan 
28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.811570 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.893322 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2cda318-0ae9-4565-bca6-f1407913545a-serving-cert\") pod \"a2cda318-0ae9-4565-bca6-f1407913545a\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.893366 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnlwt\" (UniqueName: \"kubernetes.io/projected/a2cda318-0ae9-4565-bca6-f1407913545a-kube-api-access-cnlwt\") pod \"a2cda318-0ae9-4565-bca6-f1407913545a\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.893415 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-config\") pod \"a2cda318-0ae9-4565-bca6-f1407913545a\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.893451 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-client-ca\") pod \"a2cda318-0ae9-4565-bca6-f1407913545a\" (UID: \"a2cda318-0ae9-4565-bca6-f1407913545a\") " Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.894134 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a2cda318-0ae9-4565-bca6-f1407913545a" (UID: "a2cda318-0ae9-4565-bca6-f1407913545a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.894518 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-config" (OuterVolumeSpecName: "config") pod "a2cda318-0ae9-4565-bca6-f1407913545a" (UID: "a2cda318-0ae9-4565-bca6-f1407913545a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.901448 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cda318-0ae9-4565-bca6-f1407913545a-kube-api-access-cnlwt" (OuterVolumeSpecName: "kube-api-access-cnlwt") pod "a2cda318-0ae9-4565-bca6-f1407913545a" (UID: "a2cda318-0ae9-4565-bca6-f1407913545a"). InnerVolumeSpecName "kube-api-access-cnlwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.901936 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cda318-0ae9-4565-bca6-f1407913545a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a2cda318-0ae9-4565-bca6-f1407913545a" (UID: "a2cda318-0ae9-4565-bca6-f1407913545a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.995254 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnlwt\" (UniqueName: \"kubernetes.io/projected/a2cda318-0ae9-4565-bca6-f1407913545a-kube-api-access-cnlwt\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.995313 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.995327 4903 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2cda318-0ae9-4565-bca6-f1407913545a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:44 crc kubenswrapper[4903]: I0128 15:51:44.995347 4903 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2cda318-0ae9-4565-bca6-f1407913545a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:45 crc kubenswrapper[4903]: I0128 15:51:45.725236 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" Jan 28 15:51:45 crc kubenswrapper[4903]: I0128 15:51:45.725257 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf" event={"ID":"a2cda318-0ae9-4565-bca6-f1407913545a","Type":"ContainerDied","Data":"845e9f97bdff91986781f82f1c0ef43e5ca6147f8f6e900e91e5bcead39c9198"} Jan 28 15:51:45 crc kubenswrapper[4903]: I0128 15:51:45.725629 4903 scope.go:117] "RemoveContainer" containerID="48add5c8956f4fcfb2b77529d4d22a21d95531721910c22b83177aff6b430551" Jan 28 15:51:45 crc kubenswrapper[4903]: I0128 15:51:45.727432 4903 generic.go:334] "Generic (PLEG): container finished" podID="eaaba70c-318d-4992-bfca-fd9ac7216a50" containerID="965a06174b84b019fce05f0114136295c450108f21787efef9378fe3fc179d05" exitCode=0 Jan 28 15:51:45 crc kubenswrapper[4903]: I0128 15:51:45.727554 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jbb" event={"ID":"eaaba70c-318d-4992-bfca-fd9ac7216a50","Type":"ContainerDied","Data":"965a06174b84b019fce05f0114136295c450108f21787efef9378fe3fc179d05"} Jan 28 15:51:45 crc kubenswrapper[4903]: I0128 15:51:45.783464 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf"] Jan 28 15:51:45 crc kubenswrapper[4903]: I0128 15:51:45.788189 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bfc7d8bb-vxgmf"] Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.208441 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57"] Jan 28 15:51:46 crc kubenswrapper[4903]: E0128 15:51:46.208745 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cda318-0ae9-4565-bca6-f1407913545a" containerName="route-controller-manager" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.208773 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cda318-0ae9-4565-bca6-f1407913545a" containerName="route-controller-manager" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.208927 4903 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a2cda318-0ae9-4565-bca6-f1407913545a" containerName="route-controller-manager" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.209502 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.211406 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.211665 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.212372 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.212380 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.212430 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.216285 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.217201 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57"] Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.308853 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900c17b3-3732-46d9-b9b9-e593d6fd712b-serving-cert\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.308912 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/900c17b3-3732-46d9-b9b9-e593d6fd712b-client-ca\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.308935 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxht\" (UniqueName: \"kubernetes.io/projected/900c17b3-3732-46d9-b9b9-e593d6fd712b-kube-api-access-bzxht\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.308978 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900c17b3-3732-46d9-b9b9-e593d6fd712b-config\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 
15:51:46.359577 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerName="oauth-openshift" containerID="cri-o://1c82ae5bff552c82cae190673e343f3192afabaf39a5b332fc73398448551c7a" gracePeriod=15 Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.410478 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/900c17b3-3732-46d9-b9b9-e593d6fd712b-client-ca\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.410550 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzxht\" (UniqueName: \"kubernetes.io/projected/900c17b3-3732-46d9-b9b9-e593d6fd712b-kube-api-access-bzxht\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.410621 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900c17b3-3732-46d9-b9b9-e593d6fd712b-config\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.410662 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900c17b3-3732-46d9-b9b9-e593d6fd712b-serving-cert\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.412173 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/900c17b3-3732-46d9-b9b9-e593d6fd712b-client-ca\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.412373 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/900c17b3-3732-46d9-b9b9-e593d6fd712b-config\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.416089 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/900c17b3-3732-46d9-b9b9-e593d6fd712b-serving-cert\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.420721 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2cda318-0ae9-4565-bca6-f1407913545a" 
path="/var/lib/kubelet/pods/a2cda318-0ae9-4565-bca6-f1407913545a/volumes" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.430772 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzxht\" (UniqueName: \"kubernetes.io/projected/900c17b3-3732-46d9-b9b9-e593d6fd712b-kube-api-access-bzxht\") pod \"route-controller-manager-b899d4f65-qvn57\" (UID: \"900c17b3-3732-46d9-b9b9-e593d6fd712b\") " pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.532997 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.734188 4903 generic.go:334] "Generic (PLEG): container finished" podID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerID="1c82ae5bff552c82cae190673e343f3192afabaf39a5b332fc73398448551c7a" exitCode=0 Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.734269 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" event={"ID":"be0f6d6d-ffaf-4889-a91d-a2a79d69758a","Type":"ContainerDied","Data":"1c82ae5bff552c82cae190673e343f3192afabaf39a5b332fc73398448551c7a"} Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.740363 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7jbb" event={"ID":"eaaba70c-318d-4992-bfca-fd9ac7216a50","Type":"ContainerStarted","Data":"30a3a249d089f2b3ea820a3f32d85b87729d93093513db6a6d093df0ec7825f7"} Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.743601 4903 generic.go:334] "Generic (PLEG): container finished" podID="c546c08d-16e9-4d6f-b474-8602788b2dfc" containerID="dc34f399598a2001da991b0463186dc8a9d72243bfe6b752e1478731fdc3e3e8" exitCode=0 Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.743661 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9s6n" event={"ID":"c546c08d-16e9-4d6f-b474-8602788b2dfc","Type":"ContainerDied","Data":"dc34f399598a2001da991b0463186dc8a9d72243bfe6b752e1478731fdc3e3e8"} Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.760739 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b7jbb" podStartSLOduration=3.135787494 podStartE2EDuration="5.760723026s" podCreationTimestamp="2026-01-28 15:51:41 +0000 UTC" firstStartedPulling="2026-01-28 15:51:43.697974332 +0000 UTC m=+375.973945843" lastFinishedPulling="2026-01-28 15:51:46.322909864 +0000 UTC m=+378.598881375" observedRunningTime="2026-01-28 15:51:46.757079441 +0000 UTC m=+379.033050972" watchObservedRunningTime="2026-01-28 15:51:46.760723026 +0000 UTC m=+379.036694537" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.837882 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917154 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-trusted-ca-bundle\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917243 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-service-ca\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917277 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-dir\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917315 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-error\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917379 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-login\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917408 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-router-certs\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917450 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918157 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.917434 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-session\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918541 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-ocp-branding-template\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918580 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q26vp\" (UniqueName: \"kubernetes.io/projected/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-kube-api-access-q26vp\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918614 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-policies\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918638 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-idp-0-file-data\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918668 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-provider-selection\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918690 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-serving-cert\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.918726 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-cliconfig\") pod \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\" (UID: \"be0f6d6d-ffaf-4889-a91d-a2a79d69758a\") " Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.919163 4903 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.919187 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.919193 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.919488 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.920259 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.922744 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.923127 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.925161 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.925655 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.929318 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.935900 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.936100 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-kube-api-access-q26vp" (OuterVolumeSpecName: "kube-api-access-q26vp") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "kube-api-access-q26vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.936209 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.936356 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "be0f6d6d-ffaf-4889-a91d-a2a79d69758a" (UID: "be0f6d6d-ffaf-4889-a91d-a2a79d69758a"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:51:46 crc kubenswrapper[4903]: I0128 15:51:46.959639 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57"] Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020264 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020308 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020323 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020338 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020351 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q26vp\" (UniqueName: \"kubernetes.io/projected/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-kube-api-access-q26vp\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020363 4903 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020375 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020387 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020400 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020413 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020424 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 
15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.020438 4903 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/be0f6d6d-ffaf-4889-a91d-a2a79d69758a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.755966 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" event={"ID":"900c17b3-3732-46d9-b9b9-e593d6fd712b","Type":"ContainerStarted","Data":"1050fbd49c14caa66960f5dbe4b697fcabfada348f4e2b6496da9978211ab50e"} Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.756013 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" event={"ID":"900c17b3-3732-46d9-b9b9-e593d6fd712b","Type":"ContainerStarted","Data":"dd8b5e2318221fa113990fbde144a6d9cd9731ab1f451d393d649ec1972f454a"} Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.757379 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.759388 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9s6n" event={"ID":"c546c08d-16e9-4d6f-b474-8602788b2dfc","Type":"ContainerStarted","Data":"1277201eb4c07d94cf7ffcd34afec06d4430426694bb98b2e91c69bab36fc2ef"} Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.760678 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" event={"ID":"be0f6d6d-ffaf-4889-a91d-a2a79d69758a","Type":"ContainerDied","Data":"47d48aa4767a23c93df58b35cb07eab632a9bcf2f148265f0804cc1e07409357"} Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.760710 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dqbbb" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.760754 4903 scope.go:117] "RemoveContainer" containerID="1c82ae5bff552c82cae190673e343f3192afabaf39a5b332fc73398448551c7a" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.763488 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.782549 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b899d4f65-qvn57" podStartSLOduration=3.782507191 podStartE2EDuration="3.782507191s" podCreationTimestamp="2026-01-28 15:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:51:47.776397906 +0000 UTC m=+380.052369417" watchObservedRunningTime="2026-01-28 15:51:47.782507191 +0000 UTC m=+380.058478712" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.857148 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9s6n" podStartSLOduration=2.443065911 podStartE2EDuration="4.857060355s" podCreationTimestamp="2026-01-28 15:51:43 +0000 UTC" firstStartedPulling="2026-01-28 15:51:44.710120232 +0000 UTC m=+376.986091743" lastFinishedPulling="2026-01-28 15:51:47.124114686 +0000 UTC m=+379.400086187" observedRunningTime="2026-01-28 15:51:47.834464189 +0000 UTC m=+380.110435700" watchObservedRunningTime="2026-01-28 15:51:47.857060355 +0000 UTC m=+380.133031886" Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.870173 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dqbbb"] Jan 28 15:51:47 crc kubenswrapper[4903]: I0128 15:51:47.871715 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dqbbb"] Jan 28 15:51:48 crc kubenswrapper[4903]: I0128 15:51:48.422170 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" path="/var/lib/kubelet/pods/be0f6d6d-ffaf-4889-a91d-a2a79d69758a/volumes" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.409639 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.409725 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.451720 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.621920 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.622249 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.661824 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.809867 4903 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2slh" Jan 28 15:51:50 crc kubenswrapper[4903]: I0128 15:51:50.810786 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qh5wp" Jan 28 15:51:52 crc kubenswrapper[4903]: I0128 15:51:52.254260 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:52 crc kubenswrapper[4903]: I0128 15:51:52.254350 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:52 crc kubenswrapper[4903]: I0128 15:51:52.302445 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:52 crc kubenswrapper[4903]: I0128 15:51:52.825881 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b7jbb" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.212703 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d78cc5f67-f5b64"] Jan 28 15:51:53 crc kubenswrapper[4903]: E0128 15:51:53.213260 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerName="oauth-openshift" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.213274 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerName="oauth-openshift" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.213378 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0f6d6d-ffaf-4889-a91d-a2a79d69758a" containerName="oauth-openshift" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.213803 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.216574 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.221462 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.221733 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.221844 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.222341 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.222880 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.222928 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.223147 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.223172 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.223354 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.223715 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.222992 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.233864 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.240947 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d78cc5f67-f5b64"] Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.249408 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.256096 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.301984 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88c5p\" (UniqueName: \"kubernetes.io/projected/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-kube-api-access-88c5p\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " 
pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302075 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-session\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302162 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-audit-policies\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302232 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-error\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302268 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302293 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302316 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302415 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302446 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302517 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302624 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-audit-dir\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302680 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-login\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302705 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.302785 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.403975 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-session\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404045 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-audit-policies\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404105 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-error\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404140 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404173 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404206 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404251 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404283 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404330 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404381 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-audit-dir\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404426 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-login\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404463 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404507 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404600 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88c5p\" (UniqueName: \"kubernetes.io/projected/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-kube-api-access-88c5p\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.404919 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-audit-policies\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.405141 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-audit-dir\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.405577 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.406102 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.406478 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.411081 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.411457 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-error\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.411630 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.411821 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-session\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.412037 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.412703 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.413016 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-template-login\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.415944 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.428490 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88c5p\" (UniqueName: \"kubernetes.io/projected/d3be947c-fdfa-429a-902c-d2ce6cf0b0d5-kube-api-access-88c5p\") pod \"oauth-openshift-6d78cc5f67-f5b64\" (UID: \"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5\") " pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.554923 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.602080 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.602144 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.673136 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.828019 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9s6n" Jan 28 15:51:53 crc kubenswrapper[4903]: I0128 15:51:53.992185 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d78cc5f67-f5b64"] Jan 28 15:51:54 crc kubenswrapper[4903]: I0128 15:51:54.798173 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" event={"ID":"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5","Type":"ContainerStarted","Data":"8990869d1df8d12d507898904592e613b41f057ca6631725dba6ce855371a1f8"} Jan 28 15:51:55 crc kubenswrapper[4903]: I0128 15:51:55.807825 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" event={"ID":"d3be947c-fdfa-429a-902c-d2ce6cf0b0d5","Type":"ContainerStarted","Data":"e94c8a08616c4e244418bb702fba3279848e347d0483fbc7ab67f4c57d24fad0"} Jan 28 15:51:55 crc kubenswrapper[4903]: I0128 15:51:55.808921 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:55 crc kubenswrapper[4903]: I0128 15:51:55.858643 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" podStartSLOduration=34.858622384 podStartE2EDuration="34.858622384s" podCreationTimestamp="2026-01-28 15:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:51:55.857712689 +0000 UTC m=+388.133684200" watchObservedRunningTime="2026-01-28 15:51:55.858622384 +0000 UTC m=+388.134593915" Jan 28 15:51:56 crc kubenswrapper[4903]: I0128 15:51:56.282580 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d78cc5f67-f5b64" Jan 28 15:51:56 crc kubenswrapper[4903]: I0128 15:51:56.614189 4903 patch_prober.go:28] 
interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:51:56 crc kubenswrapper[4903]: I0128 15:51:56.614255 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:52:06 crc kubenswrapper[4903]: I0128 15:52:06.780924 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" podUID="c1dff77d-5e58-42e0-bfac-040973ea3094" containerName="registry" containerID="cri-o://dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0" gracePeriod=30 Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.194238 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.320776 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1dff77d-5e58-42e0-bfac-040973ea3094-installation-pull-secrets\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.320839 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-trusted-ca\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.320902 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dpdv\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-kube-api-access-9dpdv\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.321693 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.321735 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.321798 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-bound-sa-token\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.322128 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-certificates\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.322181 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-tls\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.322240 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1dff77d-5e58-42e0-bfac-040973ea3094-ca-trust-extracted\") pod \"c1dff77d-5e58-42e0-bfac-040973ea3094\" (UID: \"c1dff77d-5e58-42e0-bfac-040973ea3094\") " Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.322657 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.322696 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.330638 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.331216 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.332547 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-kube-api-access-9dpdv" (OuterVolumeSpecName: "kube-api-access-9dpdv") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "kube-api-access-9dpdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.345097 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1dff77d-5e58-42e0-bfac-040973ea3094-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.355877 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.360963 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1dff77d-5e58-42e0-bfac-040973ea3094-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c1dff77d-5e58-42e0-bfac-040973ea3094" (UID: "c1dff77d-5e58-42e0-bfac-040973ea3094"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.423325 4903 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c1dff77d-5e58-42e0-bfac-040973ea3094-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.423628 4903 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c1dff77d-5e58-42e0-bfac-040973ea3094-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.423645 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dpdv\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-kube-api-access-9dpdv\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.423653 4903 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.423669 4903 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.423678 4903 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c1dff77d-5e58-42e0-bfac-040973ea3094-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.876677 4903 generic.go:334] "Generic (PLEG): container finished" podID="c1dff77d-5e58-42e0-bfac-040973ea3094" containerID="dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0" exitCode=0 Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.876721 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" 
event={"ID":"c1dff77d-5e58-42e0-bfac-040973ea3094","Type":"ContainerDied","Data":"dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0"} Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.876742 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.876750 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8t5gp" event={"ID":"c1dff77d-5e58-42e0-bfac-040973ea3094","Type":"ContainerDied","Data":"b02209a16439f41ba249bb856ea29c45ef95ea9016386b295c6c52a64b9c52e4"} Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.876769 4903 scope.go:117] "RemoveContainer" containerID="dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.896431 4903 scope.go:117] "RemoveContainer" containerID="dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0" Jan 28 15:52:07 crc kubenswrapper[4903]: E0128 15:52:07.897051 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0\": container with ID starting with dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0 not found: ID does not exist" containerID="dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.897084 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0"} err="failed to get container status \"dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0\": rpc error: code = NotFound desc = could not find container \"dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0\": container with ID starting with dd06bc2e2a9bc2de333fb39171adf793b88f0b23b35d4abd3b647e8c6f4e2bc0 not found: ID does not exist" Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.923110 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8t5gp"] Jan 28 15:52:07 crc kubenswrapper[4903]: I0128 15:52:07.928003 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8t5gp"] Jan 28 15:52:08 crc kubenswrapper[4903]: I0128 15:52:08.420437 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1dff77d-5e58-42e0-bfac-040973ea3094" path="/var/lib/kubelet/pods/c1dff77d-5e58-42e0-bfac-040973ea3094/volumes" Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.613284 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.613898 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.613953 4903 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.614575 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4997084f57a6cd366ada9b77ed2b50e6809e074fd29397f82383459cfec25834"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.614631 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://4997084f57a6cd366ada9b77ed2b50e6809e074fd29397f82383459cfec25834" gracePeriod=600 Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.997495 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="4997084f57a6cd366ada9b77ed2b50e6809e074fd29397f82383459cfec25834" exitCode=0 Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.997565 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"4997084f57a6cd366ada9b77ed2b50e6809e074fd29397f82383459cfec25834"} Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.998142 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"2702c7586f05ef407a560aa20b4b1483f456a5aad5600fa168e29968e5042eb4"} Jan 28 15:52:26 crc kubenswrapper[4903]: I0128 15:52:26.998176 4903 scope.go:117] "RemoveContainer" containerID="f1dfd2f25c47d6c2fb26668ff0d637941fabf85ddc4602f119f44fa7b4d86621" Jan 28 15:54:26 crc kubenswrapper[4903]: I0128 15:54:26.613431 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:54:26 crc kubenswrapper[4903]: I0128 15:54:26.613948 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:54:56 crc kubenswrapper[4903]: I0128 15:54:56.614111 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:54:56 crc kubenswrapper[4903]: I0128 15:54:56.614902 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
28 15:55:26 crc kubenswrapper[4903]: I0128 15:55:26.614327 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:55:26 crc kubenswrapper[4903]: I0128 15:55:26.615139 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:55:26 crc kubenswrapper[4903]: I0128 15:55:26.615222 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:55:26 crc kubenswrapper[4903]: I0128 15:55:26.616246 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2702c7586f05ef407a560aa20b4b1483f456a5aad5600fa168e29968e5042eb4"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:55:26 crc kubenswrapper[4903]: I0128 15:55:26.616409 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://2702c7586f05ef407a560aa20b4b1483f456a5aad5600fa168e29968e5042eb4" gracePeriod=600 Jan 28 15:55:27 crc kubenswrapper[4903]: I0128 15:55:27.288941 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="2702c7586f05ef407a560aa20b4b1483f456a5aad5600fa168e29968e5042eb4" exitCode=0 Jan 28 15:55:27 crc kubenswrapper[4903]: I0128 15:55:27.288993 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"2702c7586f05ef407a560aa20b4b1483f456a5aad5600fa168e29968e5042eb4"} Jan 28 15:55:27 crc kubenswrapper[4903]: I0128 15:55:27.289369 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"076145b459522bcd2bea9cb08cae4aa7b63523e3096a45977e0c8639d4b92ae4"} Jan 28 15:55:27 crc kubenswrapper[4903]: I0128 15:55:27.289399 4903 scope.go:117] "RemoveContainer" containerID="4997084f57a6cd366ada9b77ed2b50e6809e074fd29397f82383459cfec25834" Jan 28 15:57:26 crc kubenswrapper[4903]: I0128 15:57:26.613992 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:57:26 crc kubenswrapper[4903]: I0128 15:57:26.615432 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:57:53 crc kubenswrapper[4903]: I0128 15:57:53.740058 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dwbc4"] Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.143256 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-controller" containerID="cri-o://722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.143358 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-node" containerID="cri-o://945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.143357 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-acl-logging" containerID="cri-o://3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.143394 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="northd" containerID="cri-o://46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.143444 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="sbdb" containerID="cri-o://f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.143423 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.143434 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="nbdb" containerID="cri-o://540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.174860 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" containerID="cri-o://4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" gracePeriod=30 Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.470456 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/3.log" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.473290 4903 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovn-acl-logging/0.log" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.473943 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovn-controller/0.log" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.474597 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.529744 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vn59q"] Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.529996 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530021 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530032 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="nbdb" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530040 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="nbdb" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530053 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530061 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530070 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530077 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530092 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="sbdb" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530099 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="sbdb" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530110 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530118 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530126 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530133 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530145 4903 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kubecfg-setup" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530153 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kubecfg-setup" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530167 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1dff77d-5e58-42e0-bfac-040973ea3094" containerName="registry" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530176 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1dff77d-5e58-42e0-bfac-040973ea3094" containerName="registry" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530188 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-acl-logging" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530195 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-acl-logging" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530205 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-node" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530212 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-node" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530224 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="northd" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530231 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="northd" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530243 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530250 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530358 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="nbdb" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530367 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530376 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530382 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530389 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="northd" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530397 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc 
kubenswrapper[4903]: I0128 15:57:54.530405 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="sbdb" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530411 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-node" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530418 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovn-acl-logging" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530427 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530434 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1dff77d-5e58-42e0-bfac-040973ea3094" containerName="registry" Jan 28 15:57:54 crc kubenswrapper[4903]: E0128 15:57:54.530550 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530558 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530640 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.530793 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cc3edd-9664-4899-b496-47543927e256" containerName="ovnkube-controller" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.532201 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608514 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-slash\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608591 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-kubelet\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608621 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-systemd-units\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608642 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-slash" (OuterVolumeSpecName: "host-slash") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608647 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwk55\" (UniqueName: \"kubernetes.io/projected/29cc3edd-9664-4899-b496-47543927e256-kube-api-access-nwk55\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608689 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608710 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608773 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-node-log\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608818 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-node-log" (OuterVolumeSpecName: "node-log") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608859 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-ovn-kubernetes\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608885 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-config\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608919 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-log-socket\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608938 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29cc3edd-9664-4899-b496-47543927e256-ovn-node-metrics-cert\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608948 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608963 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-etc-openvswitch\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608976 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-log-socket" (OuterVolumeSpecName: "log-socket") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608984 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-var-lib-openvswitch\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.608997 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609012 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-var-lib-cni-networks-ovn-kubernetes\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609040 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-netd\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609034 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609060 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-env-overrides\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609082 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609086 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609089 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-bin\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609116 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609143 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-netns\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609168 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-openvswitch\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609192 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-ovn\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609222 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-script-lib\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609248 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-systemd\") pod \"29cc3edd-9664-4899-b496-47543927e256\" (UID: \"29cc3edd-9664-4899-b496-47543927e256\") " Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609253 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609316 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609263 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609436 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609580 4903 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609593 4903 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609601 4903 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609610 4903 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609598 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609644 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609617 4903 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609687 4903 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609698 4903 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609709 4903 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609719 4903 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609727 4903 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609735 4903 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609743 4903 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609751 4903 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609760 4903 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.609768 4903 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.614062 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29cc3edd-9664-4899-b496-47543927e256-kube-api-access-nwk55" (OuterVolumeSpecName: "kube-api-access-nwk55") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "kube-api-access-nwk55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.614287 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29cc3edd-9664-4899-b496-47543927e256-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.621633 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "29cc3edd-9664-4899-b496-47543927e256" (UID: "29cc3edd-9664-4899-b496-47543927e256"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.711065 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovnkube-config\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.711207 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.711256 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.711309 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-ovn\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.711345 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovn-node-metrics-cert\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.711378 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjrdw\" (UniqueName: \"kubernetes.io/projected/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-kube-api-access-qjrdw\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712683 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-cni-netd\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712733 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-run-netns\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712764 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-run-ovn-kubernetes\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712787 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-systemd-units\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712813 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-cni-bin\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712893 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-systemd\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712949 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-slash\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.712976 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-var-lib-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713080 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-etc-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 
15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713115 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-kubelet\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713162 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-log-socket\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713201 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovnkube-script-lib\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713244 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-node-log\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713279 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-env-overrides\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713395 4903 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713438 4903 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29cc3edd-9664-4899-b496-47543927e256-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713452 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwk55\" (UniqueName: \"kubernetes.io/projected/29cc3edd-9664-4899-b496-47543927e256-kube-api-access-nwk55\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713472 4903 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29cc3edd-9664-4899-b496-47543927e256-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.713487 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29cc3edd-9664-4899-b496-47543927e256-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814454 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovn-node-metrics-cert\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814511 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjrdw\" (UniqueName: \"kubernetes.io/projected/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-kube-api-access-qjrdw\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814559 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-cni-netd\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814579 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-run-netns\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814599 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-run-ovn-kubernetes\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814621 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-systemd-units\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814644 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-cni-bin\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814664 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-systemd\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814684 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-slash\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814702 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-var-lib-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814712 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-cni-netd\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814745 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-etc-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814755 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-cni-bin\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814766 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-kubelet\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814757 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-run-ovn-kubernetes\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814800 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-kubelet\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814792 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-run-netns\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814786 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-systemd-units\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814822 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-log-socket\") pod \"ovnkube-node-vn59q\" (UID: 
\"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814806 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-log-socket\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814850 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-var-lib-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814851 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovnkube-script-lib\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814807 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-slash\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814891 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-etc-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814909 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-systemd\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814955 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-node-log\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.814992 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-env-overrides\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815015 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-node-log\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815074 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovnkube-config\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815093 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815112 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815135 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-ovn\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815183 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-ovn\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815191 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-run-openvswitch\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815191 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815612 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovnkube-script-lib\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815661 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovnkube-config\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.815927 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-env-overrides\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.820095 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-ovn-node-metrics-cert\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.830488 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjrdw\" (UniqueName: \"kubernetes.io/projected/a5afb9ef-715c-4346-af28-dafb1a7fdcc4-kube-api-access-qjrdw\") pod \"ovnkube-node-vn59q\" (UID: \"a5afb9ef-715c-4346-af28-dafb1a7fdcc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:54 crc kubenswrapper[4903]: I0128 15:57:54.848785 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.149472 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/2.log" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.149959 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/1.log" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.150008 4903 generic.go:334] "Generic (PLEG): container finished" podID="368501de-b207-4b6b-a0fb-eba74fe5ec74" containerID="8b220e2208dc7b263de1e53ad8af6f9ba881497ddd3302f155d27d444170c4b4" exitCode=2 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.150072 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerDied","Data":"8b220e2208dc7b263de1e53ad8af6f9ba881497ddd3302f155d27d444170c4b4"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.150108 4903 scope.go:117] "RemoveContainer" containerID="47e9c23f8c92f107227fe1b49765095afe94d22574b0d41000b6bb89bb41fb31" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.150692 4903 scope.go:117] "RemoveContainer" containerID="8b220e2208dc7b263de1e53ad8af6f9ba881497ddd3302f155d27d444170c4b4" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.151819 4903 generic.go:334] "Generic (PLEG): container finished" podID="a5afb9ef-715c-4346-af28-dafb1a7fdcc4" containerID="d18df78add08ee67f70394af48eb5aff795e904d0ad657dae9f38850c4538093" exitCode=0 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.151884 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerDied","Data":"d18df78add08ee67f70394af48eb5aff795e904d0ad657dae9f38850c4538093"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.151911 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"1246560675a4caef61d6cd66505dc1269fac276d5fd084eef0d813caf5eb10dd"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.154423 4903 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovnkube-controller/3.log" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.157765 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovn-acl-logging/0.log" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158257 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dwbc4_29cc3edd-9664-4899-b496-47543927e256/ovn-controller/0.log" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158701 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" exitCode=0 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158724 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" exitCode=0 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158731 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" exitCode=0 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158738 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" exitCode=0 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158746 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" exitCode=0 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158752 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" exitCode=0 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158758 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" exitCode=143 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158765 4903 generic.go:334] "Generic (PLEG): container finished" podID="29cc3edd-9664-4899-b496-47543927e256" containerID="722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" exitCode=143 Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158766 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158781 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158802 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158812 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158822 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158830 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.158863 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160180 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160192 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160198 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160204 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160208 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160213 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160218 4903 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160223 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160228 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160472 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160918 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160938 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160967 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160973 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160978 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160983 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.160987 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161024 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161029 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161034 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161038 4903 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161047 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161055 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161061 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161066 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161071 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161075 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161081 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161086 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161090 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161095 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161100 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161107 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dwbc4" event={"ID":"29cc3edd-9664-4899-b496-47543927e256","Type":"ContainerDied","Data":"142b2a4f165086b669ab2b0f49ae91eed4de506993f79d8997e56c996b4f67b9"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161113 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} 
Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161119 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161124 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161129 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161133 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161138 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161143 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161147 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161152 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.161157 4903 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.195955 4903 scope.go:117] "RemoveContainer" containerID="4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.211796 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.223881 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dwbc4"] Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.229917 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dwbc4"] Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.242755 4903 scope.go:117] "RemoveContainer" containerID="f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.257856 4903 scope.go:117] "RemoveContainer" containerID="540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.299342 4903 scope.go:117] "RemoveContainer" containerID="46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.314254 4903 
scope.go:117] "RemoveContainer" containerID="195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.329748 4903 scope.go:117] "RemoveContainer" containerID="945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.351628 4903 scope.go:117] "RemoveContainer" containerID="3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.378022 4903 scope.go:117] "RemoveContainer" containerID="722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.396373 4903 scope.go:117] "RemoveContainer" containerID="d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.424111 4903 scope.go:117] "RemoveContainer" containerID="4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.424908 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": container with ID starting with 4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028 not found: ID does not exist" containerID="4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.424957 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} err="failed to get container status \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": rpc error: code = NotFound desc = could not find container \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": container with ID starting with 4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.424984 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.426231 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": container with ID starting with 92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa not found: ID does not exist" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.426262 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} err="failed to get container status \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": rpc error: code = NotFound desc = could not find container \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": container with ID starting with 92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.426283 4903 scope.go:117] "RemoveContainer" containerID="f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.426804 4903 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": container with ID starting with f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237 not found: ID does not exist" containerID="f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.426878 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} err="failed to get container status \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": rpc error: code = NotFound desc = could not find container \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": container with ID starting with f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.426931 4903 scope.go:117] "RemoveContainer" containerID="540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.427337 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": container with ID starting with 540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a not found: ID does not exist" containerID="540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.427372 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} err="failed to get container status \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": rpc error: code = NotFound desc = could not find container \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": container with ID starting with 540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.427393 4903 scope.go:117] "RemoveContainer" containerID="46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.430208 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": container with ID starting with 46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541 not found: ID does not exist" containerID="46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.430291 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} err="failed to get container status \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": rpc error: code = NotFound desc = could not find container \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": container with ID starting with 46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.430321 4903 scope.go:117] 
"RemoveContainer" containerID="195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.430700 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": container with ID starting with 195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158 not found: ID does not exist" containerID="195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.430749 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} err="failed to get container status \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": rpc error: code = NotFound desc = could not find container \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": container with ID starting with 195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.430780 4903 scope.go:117] "RemoveContainer" containerID="945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.431115 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": container with ID starting with 945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff not found: ID does not exist" containerID="945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.431154 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} err="failed to get container status \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": rpc error: code = NotFound desc = could not find container \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": container with ID starting with 945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.431178 4903 scope.go:117] "RemoveContainer" containerID="3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.431468 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": container with ID starting with 3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0 not found: ID does not exist" containerID="3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.431499 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} err="failed to get container status \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": rpc error: code = NotFound desc = could not find container \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": container with ID starting with 
3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.431517 4903 scope.go:117] "RemoveContainer" containerID="722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.431817 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": container with ID starting with 722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97 not found: ID does not exist" containerID="722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.431847 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} err="failed to get container status \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": rpc error: code = NotFound desc = could not find container \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": container with ID starting with 722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.431868 4903 scope.go:117] "RemoveContainer" containerID="d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12" Jan 28 15:57:55 crc kubenswrapper[4903]: E0128 15:57:55.432116 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": container with ID starting with d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12 not found: ID does not exist" containerID="d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.432152 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} err="failed to get container status \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": rpc error: code = NotFound desc = could not find container \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": container with ID starting with d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.432172 4903 scope.go:117] "RemoveContainer" containerID="4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.432665 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} err="failed to get container status \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": rpc error: code = NotFound desc = could not find container \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": container with ID starting with 4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.432687 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:57:55 crc 
kubenswrapper[4903]: I0128 15:57:55.432947 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} err="failed to get container status \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": rpc error: code = NotFound desc = could not find container \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": container with ID starting with 92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.432970 4903 scope.go:117] "RemoveContainer" containerID="f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.433228 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} err="failed to get container status \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": rpc error: code = NotFound desc = could not find container \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": container with ID starting with f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.433259 4903 scope.go:117] "RemoveContainer" containerID="540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.433495 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} err="failed to get container status \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": rpc error: code = NotFound desc = could not find container \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": container with ID starting with 540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.433520 4903 scope.go:117] "RemoveContainer" containerID="46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.433765 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} err="failed to get container status \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": rpc error: code = NotFound desc = could not find container \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": container with ID starting with 46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.433790 4903 scope.go:117] "RemoveContainer" containerID="195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.434134 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} err="failed to get container status \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": rpc error: code = NotFound desc = could not find container \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": container with ID 
starting with 195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.434156 4903 scope.go:117] "RemoveContainer" containerID="945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.434428 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} err="failed to get container status \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": rpc error: code = NotFound desc = could not find container \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": container with ID starting with 945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.434453 4903 scope.go:117] "RemoveContainer" containerID="3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.434699 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} err="failed to get container status \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": rpc error: code = NotFound desc = could not find container \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": container with ID starting with 3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.434724 4903 scope.go:117] "RemoveContainer" containerID="722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.435125 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} err="failed to get container status \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": rpc error: code = NotFound desc = could not find container \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": container with ID starting with 722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.435144 4903 scope.go:117] "RemoveContainer" containerID="d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.435726 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} err="failed to get container status \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": rpc error: code = NotFound desc = could not find container \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": container with ID starting with d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.435753 4903 scope.go:117] "RemoveContainer" containerID="4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.436161 4903 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} err="failed to get container status \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": rpc error: code = NotFound desc = could not find container \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": container with ID starting with 4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.436183 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.436474 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} err="failed to get container status \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": rpc error: code = NotFound desc = could not find container \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": container with ID starting with 92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.436493 4903 scope.go:117] "RemoveContainer" containerID="f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.436803 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} err="failed to get container status \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": rpc error: code = NotFound desc = could not find container \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": container with ID starting with f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.436821 4903 scope.go:117] "RemoveContainer" containerID="540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437027 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} err="failed to get container status \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": rpc error: code = NotFound desc = could not find container \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": container with ID starting with 540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437045 4903 scope.go:117] "RemoveContainer" containerID="46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437359 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} err="failed to get container status \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": rpc error: code = NotFound desc = could not find container \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": container with ID starting with 46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541 not found: ID does not exist" Jan 
28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437380 4903 scope.go:117] "RemoveContainer" containerID="195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437659 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} err="failed to get container status \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": rpc error: code = NotFound desc = could not find container \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": container with ID starting with 195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437687 4903 scope.go:117] "RemoveContainer" containerID="945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437919 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} err="failed to get container status \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": rpc error: code = NotFound desc = could not find container \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": container with ID starting with 945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.437938 4903 scope.go:117] "RemoveContainer" containerID="3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.438188 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} err="failed to get container status \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": rpc error: code = NotFound desc = could not find container \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": container with ID starting with 3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.438205 4903 scope.go:117] "RemoveContainer" containerID="722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.438391 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} err="failed to get container status \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": rpc error: code = NotFound desc = could not find container \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": container with ID starting with 722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.438410 4903 scope.go:117] "RemoveContainer" containerID="d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.438626 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} err="failed to get container status 
\"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": rpc error: code = NotFound desc = could not find container \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": container with ID starting with d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.438649 4903 scope.go:117] "RemoveContainer" containerID="4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.439015 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028"} err="failed to get container status \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": rpc error: code = NotFound desc = could not find container \"4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028\": container with ID starting with 4eccba33dfa39784ce01473bd7fb6990bb311755fc1cf7d94abab639688ca028 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.439038 4903 scope.go:117] "RemoveContainer" containerID="92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.439274 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa"} err="failed to get container status \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": rpc error: code = NotFound desc = could not find container \"92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa\": container with ID starting with 92db7a0c66696807843d9b45a62e4a9e6f23f7929649039d8dc2e507ca9509aa not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.439293 4903 scope.go:117] "RemoveContainer" containerID="f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.439946 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237"} err="failed to get container status \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": rpc error: code = NotFound desc = could not find container \"f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237\": container with ID starting with f71e4676dc931dc0b57d2abfb415c4eda8a3fadbc89689885fe4c60217aa7237 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.439972 4903 scope.go:117] "RemoveContainer" containerID="540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.440410 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a"} err="failed to get container status \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": rpc error: code = NotFound desc = could not find container \"540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a\": container with ID starting with 540a7be38476ad752d63ea365d5f2b1652eb4d3943c9c5ada872826028291a1a not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.440434 4903 scope.go:117] "RemoveContainer" 
containerID="46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.440724 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541"} err="failed to get container status \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": rpc error: code = NotFound desc = could not find container \"46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541\": container with ID starting with 46e7ac8325038d865e688ae86f4c32c30624fd993ae1c924db258d603e95b541 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.440758 4903 scope.go:117] "RemoveContainer" containerID="195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.441109 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158"} err="failed to get container status \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": rpc error: code = NotFound desc = could not find container \"195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158\": container with ID starting with 195317998bcb0f5277a06bc2fcf77e60f85293bf6ab05d002af7697be8166158 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.441137 4903 scope.go:117] "RemoveContainer" containerID="945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.441492 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff"} err="failed to get container status \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": rpc error: code = NotFound desc = could not find container \"945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff\": container with ID starting with 945b7df9ba6d0d7c84c16e735e558ff3d1145038918a286af83574cadac1ddff not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.441578 4903 scope.go:117] "RemoveContainer" containerID="3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.443438 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0"} err="failed to get container status \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": rpc error: code = NotFound desc = could not find container \"3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0\": container with ID starting with 3f478a5f3f4396a88c7de08c4180b6635fa5b2d7bbe43e40ce1a41c5d103d4b0 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.443473 4903 scope.go:117] "RemoveContainer" containerID="722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.443931 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97"} err="failed to get container status \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": rpc error: code = NotFound desc = could not find 
container \"722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97\": container with ID starting with 722f38cf4afbe1bd9a184163354c418e07b5591ea3244f5551639a93e748ad97 not found: ID does not exist" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.443951 4903 scope.go:117] "RemoveContainer" containerID="d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12" Jan 28 15:57:55 crc kubenswrapper[4903]: I0128 15:57:55.444193 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12"} err="failed to get container status \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": rpc error: code = NotFound desc = could not find container \"d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12\": container with ID starting with d5f5d3c6445b965415a896f305ffafe870fd40da2d83bd518a62d5c1430ebf12 not found: ID does not exist" Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.166741 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7g6pn_368501de-b207-4b6b-a0fb-eba74fe5ec74/kube-multus/2.log" Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.167029 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7g6pn" event={"ID":"368501de-b207-4b6b-a0fb-eba74fe5ec74","Type":"ContainerStarted","Data":"a08d9bb692875aaac3e2fe67c37bdbea6c2a085e5012cfa8161d870c94e7a938"} Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.170310 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"429fb7b7a779cdad3b3ed30e12e52de0959344a34972d02f7dac2bcf352bab9d"} Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.170344 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"6a2a1c6922d48f62dbf0090561d39ed716fb9fa7da41dc3fd709370dd318e6e1"} Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.170357 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"cb4d7fa7bfc92f7d974081e0e7c8ca4be0e6025f89d4292894e6a7ec4b35106f"} Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.170370 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"aa5470e61b88cbb98754d8528f6a4572ff367375aeaf744d939d42f09957a675"} Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.170383 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"807b5a3aa96c9a99a0201e5e6133d689179b00bd179d67a897321d85261c8d72"} Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.170415 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"244f870a21cac8486baf7d1c4275c9711d6f365027d9c97c13d4c9d6666f0bd8"} Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.420517 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="29cc3edd-9664-4899-b496-47543927e256" path="/var/lib/kubelet/pods/29cc3edd-9664-4899-b496-47543927e256/volumes" Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.613749 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:57:56 crc kubenswrapper[4903]: I0128 15:57:56.613813 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:57:58 crc kubenswrapper[4903]: I0128 15:57:58.187726 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"ce9b2c8773fef9769dd4d55f3b51698f7397d4ffe6983b71c65947699bd60d04"} Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.474683 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-mv7nd"] Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.476011 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.478377 4903 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-f5998" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.478457 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.479444 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.479791 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.494972 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dh6k\" (UniqueName: \"kubernetes.io/projected/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-kube-api-access-7dh6k\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.495025 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-crc-storage\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.495052 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-node-mnt\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.595638 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dh6k\" 
(UniqueName: \"kubernetes.io/projected/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-kube-api-access-7dh6k\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.595693 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-crc-storage\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.595720 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-node-mnt\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.595977 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-node-mnt\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.596419 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-crc-storage\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.621681 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dh6k\" (UniqueName: \"kubernetes.io/projected/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-kube-api-access-7dh6k\") pod \"crc-storage-crc-mv7nd\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: I0128 15:58:00.790814 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: E0128 15:58:00.825300 4903 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(2ed281a5ad4ff84c3eaf0ae6cc116227caa681bfede38efed4e2b2bb70b88f32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:58:00 crc kubenswrapper[4903]: E0128 15:58:00.825379 4903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(2ed281a5ad4ff84c3eaf0ae6cc116227caa681bfede38efed4e2b2bb70b88f32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: E0128 15:58:00.825405 4903 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(2ed281a5ad4ff84c3eaf0ae6cc116227caa681bfede38efed4e2b2bb70b88f32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:00 crc kubenswrapper[4903]: E0128 15:58:00.825463 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-mv7nd_crc-storage(78107eb1-8fa0-4870-92ea-da8fc6a4eaa3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-mv7nd_crc-storage(78107eb1-8fa0-4870-92ea-da8fc6a4eaa3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(2ed281a5ad4ff84c3eaf0ae6cc116227caa681bfede38efed4e2b2bb70b88f32): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-mv7nd" podUID="78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" Jan 28 15:58:01 crc kubenswrapper[4903]: I0128 15:58:01.075156 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mv7nd"] Jan 28 15:58:01 crc kubenswrapper[4903]: I0128 15:58:01.208369 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" event={"ID":"a5afb9ef-715c-4346-af28-dafb1a7fdcc4","Type":"ContainerStarted","Data":"76265716958fd11b142eac69d6c2600fe5b0ce3d6d0cea37a5134b1e72165612"} Jan 28 15:58:01 crc kubenswrapper[4903]: I0128 15:58:01.208657 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:01 crc kubenswrapper[4903]: I0128 15:58:01.208723 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:58:01 crc kubenswrapper[4903]: I0128 15:58:01.209088 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:01 crc kubenswrapper[4903]: I0128 15:58:01.253130 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" podStartSLOduration=7.253110189 podStartE2EDuration="7.253110189s" podCreationTimestamp="2026-01-28 15:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:58:01.249229133 +0000 UTC m=+753.525200644" watchObservedRunningTime="2026-01-28 15:58:01.253110189 +0000 UTC m=+753.529081700" Jan 28 15:58:01 crc kubenswrapper[4903]: E0128 15:58:01.258551 4903 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(c9ccd982f67c17be3b8b5f84bbdf27c21cfb298c46ec923a014eac36e3938808): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 15:58:01 crc kubenswrapper[4903]: E0128 15:58:01.258661 4903 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(c9ccd982f67c17be3b8b5f84bbdf27c21cfb298c46ec923a014eac36e3938808): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:01 crc kubenswrapper[4903]: E0128 15:58:01.258690 4903 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(c9ccd982f67c17be3b8b5f84bbdf27c21cfb298c46ec923a014eac36e3938808): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:01 crc kubenswrapper[4903]: E0128 15:58:01.258753 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-mv7nd_crc-storage(78107eb1-8fa0-4870-92ea-da8fc6a4eaa3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-mv7nd_crc-storage(78107eb1-8fa0-4870-92ea-da8fc6a4eaa3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mv7nd_crc-storage_78107eb1-8fa0-4870-92ea-da8fc6a4eaa3_0(c9ccd982f67c17be3b8b5f84bbdf27c21cfb298c46ec923a014eac36e3938808): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-mv7nd" podUID="78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" Jan 28 15:58:01 crc kubenswrapper[4903]: I0128 15:58:01.260192 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:58:02 crc kubenswrapper[4903]: I0128 15:58:02.214294 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:58:02 crc kubenswrapper[4903]: I0128 15:58:02.214357 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:58:02 crc kubenswrapper[4903]: I0128 15:58:02.283121 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:58:13 crc kubenswrapper[4903]: I0128 15:58:13.413227 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:13 crc kubenswrapper[4903]: I0128 15:58:13.415222 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:13 crc kubenswrapper[4903]: I0128 15:58:13.654371 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mv7nd"] Jan 28 15:58:13 crc kubenswrapper[4903]: I0128 15:58:13.659185 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:58:14 crc kubenswrapper[4903]: I0128 15:58:14.292882 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mv7nd" event={"ID":"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3","Type":"ContainerStarted","Data":"2eaa08e98325173febfc52f295f4256015786ed78fab52f2b3210fb75072265c"} Jan 28 15:58:15 crc kubenswrapper[4903]: I0128 15:58:15.301503 4903 generic.go:334] "Generic (PLEG): container finished" podID="78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" containerID="89e0e94569fdb1a2ecfdf82c46029c6cc57531549c132da9fede2bee5538c6f0" exitCode=0 Jan 28 15:58:15 crc kubenswrapper[4903]: I0128 15:58:15.301696 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mv7nd" event={"ID":"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3","Type":"ContainerDied","Data":"89e0e94569fdb1a2ecfdf82c46029c6cc57531549c132da9fede2bee5538c6f0"} Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.542294 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.703156 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-node-mnt\") pod \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.703284 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dh6k\" (UniqueName: \"kubernetes.io/projected/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-kube-api-access-7dh6k\") pod \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.703300 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" (UID: "78107eb1-8fa0-4870-92ea-da8fc6a4eaa3"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.703421 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-crc-storage\") pod \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\" (UID: \"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3\") " Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.703910 4903 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.709221 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-kube-api-access-7dh6k" (OuterVolumeSpecName: "kube-api-access-7dh6k") pod "78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" (UID: "78107eb1-8fa0-4870-92ea-da8fc6a4eaa3"). 
InnerVolumeSpecName "kube-api-access-7dh6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.718091 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" (UID: "78107eb1-8fa0-4870-92ea-da8fc6a4eaa3"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.804376 4903 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:16 crc kubenswrapper[4903]: I0128 15:58:16.804416 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dh6k\" (UniqueName: \"kubernetes.io/projected/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3-kube-api-access-7dh6k\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:17 crc kubenswrapper[4903]: I0128 15:58:17.314436 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mv7nd" event={"ID":"78107eb1-8fa0-4870-92ea-da8fc6a4eaa3","Type":"ContainerDied","Data":"2eaa08e98325173febfc52f295f4256015786ed78fab52f2b3210fb75072265c"} Jan 28 15:58:17 crc kubenswrapper[4903]: I0128 15:58:17.314491 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eaa08e98325173febfc52f295f4256015786ed78fab52f2b3210fb75072265c" Jan 28 15:58:17 crc kubenswrapper[4903]: I0128 15:58:17.314490 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mv7nd" Jan 28 15:58:18 crc kubenswrapper[4903]: I0128 15:58:18.936797 4903 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.635151 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j"] Jan 28 15:58:23 crc kubenswrapper[4903]: E0128 15:58:23.635720 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" containerName="storage" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.635734 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" containerName="storage" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.635847 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" containerName="storage" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.636543 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.639031 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.652713 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j"] Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.692218 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.692305 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.692374 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lxb\" (UniqueName: \"kubernetes.io/projected/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-kube-api-access-l7lxb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.793740 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.793862 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lxb\" (UniqueName: \"kubernetes.io/projected/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-kube-api-access-l7lxb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.793919 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.794224 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.794443 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.817558 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7lxb\" (UniqueName: \"kubernetes.io/projected/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-kube-api-access-l7lxb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:23 crc kubenswrapper[4903]: I0128 15:58:23.957327 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:24 crc kubenswrapper[4903]: I0128 15:58:24.168106 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j"] Jan 28 15:58:24 crc kubenswrapper[4903]: I0128 15:58:24.360858 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" event={"ID":"d4bdebca-1925-4fd0-a85f-49a9ebed9b06","Type":"ContainerStarted","Data":"a8f37ec9f351da23290e988f4db9b455385291afc64c85f2b5e8808ad0f95b60"} Jan 28 15:58:24 crc kubenswrapper[4903]: I0128 15:58:24.360942 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" event={"ID":"d4bdebca-1925-4fd0-a85f-49a9ebed9b06","Type":"ContainerStarted","Data":"fafdbb3ccb8dc8b4e3b40bd94ec9dadb6e87da8e929f53fe3f47d4de1f6c4f6e"} Jan 28 15:58:24 crc kubenswrapper[4903]: I0128 15:58:24.873164 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vn59q" Jan 28 15:58:25 crc kubenswrapper[4903]: I0128 15:58:25.369667 4903 generic.go:334] "Generic (PLEG): container finished" podID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerID="a8f37ec9f351da23290e988f4db9b455385291afc64c85f2b5e8808ad0f95b60" exitCode=0 Jan 28 15:58:25 crc kubenswrapper[4903]: I0128 15:58:25.369710 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" event={"ID":"d4bdebca-1925-4fd0-a85f-49a9ebed9b06","Type":"ContainerDied","Data":"a8f37ec9f351da23290e988f4db9b455385291afc64c85f2b5e8808ad0f95b60"} Jan 28 15:58:25 crc kubenswrapper[4903]: I0128 15:58:25.996768 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sc6x"] Jan 28 15:58:25 crc kubenswrapper[4903]: I0128 15:58:25.997906 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.016575 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sc6x"] Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.124413 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfr6\" (UniqueName: \"kubernetes.io/projected/aa735193-59d1-4549-bc5b-7b4163a1869e-kube-api-access-2sfr6\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.124520 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-utilities\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.124657 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-catalog-content\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.225588 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfr6\" (UniqueName: \"kubernetes.io/projected/aa735193-59d1-4549-bc5b-7b4163a1869e-kube-api-access-2sfr6\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.225639 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-utilities\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.225684 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-catalog-content\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.226130 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-catalog-content\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.226353 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-utilities\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.246385 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2sfr6\" (UniqueName: \"kubernetes.io/projected/aa735193-59d1-4549-bc5b-7b4163a1869e-kube-api-access-2sfr6\") pod \"redhat-operators-2sc6x\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.332059 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.532983 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sc6x"] Jan 28 15:58:26 crc kubenswrapper[4903]: W0128 15:58:26.542781 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa735193_59d1_4549_bc5b_7b4163a1869e.slice/crio-7f24f3329bcffb81220170eac23ac9ff923c9f833cb562ff36f6d0183a0becd8 WatchSource:0}: Error finding container 7f24f3329bcffb81220170eac23ac9ff923c9f833cb562ff36f6d0183a0becd8: Status 404 returned error can't find the container with id 7f24f3329bcffb81220170eac23ac9ff923c9f833cb562ff36f6d0183a0becd8 Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.613987 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.614045 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.614084 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.614645 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"076145b459522bcd2bea9cb08cae4aa7b63523e3096a45977e0c8639d4b92ae4"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:58:26 crc kubenswrapper[4903]: I0128 15:58:26.614702 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://076145b459522bcd2bea9cb08cae4aa7b63523e3096a45977e0c8639d4b92ae4" gracePeriod=600 Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.382268 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="076145b459522bcd2bea9cb08cae4aa7b63523e3096a45977e0c8639d4b92ae4" exitCode=0 Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.382333 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"076145b459522bcd2bea9cb08cae4aa7b63523e3096a45977e0c8639d4b92ae4"} Jan 28 15:58:27 crc 
kubenswrapper[4903]: I0128 15:58:27.383118 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"954d27b4dc9851fdaed58cb75beeee55d01523bc8e8b245b32b2ba4b08a3a068"} Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.383159 4903 scope.go:117] "RemoveContainer" containerID="2702c7586f05ef407a560aa20b4b1483f456a5aad5600fa168e29968e5042eb4" Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.385282 4903 generic.go:334] "Generic (PLEG): container finished" podID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerID="bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c" exitCode=0 Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.385331 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sc6x" event={"ID":"aa735193-59d1-4549-bc5b-7b4163a1869e","Type":"ContainerDied","Data":"bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c"} Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.385382 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sc6x" event={"ID":"aa735193-59d1-4549-bc5b-7b4163a1869e","Type":"ContainerStarted","Data":"7f24f3329bcffb81220170eac23ac9ff923c9f833cb562ff36f6d0183a0becd8"} Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.388093 4903 generic.go:334] "Generic (PLEG): container finished" podID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerID="0e1cf6bb24c6bfc860b74719740fa24e0ca6c91481336b94cd46e65a7bac53fd" exitCode=0 Jan 28 15:58:27 crc kubenswrapper[4903]: I0128 15:58:27.388150 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" event={"ID":"d4bdebca-1925-4fd0-a85f-49a9ebed9b06","Type":"ContainerDied","Data":"0e1cf6bb24c6bfc860b74719740fa24e0ca6c91481336b94cd46e65a7bac53fd"} Jan 28 15:58:28 crc kubenswrapper[4903]: I0128 15:58:28.396477 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sc6x" event={"ID":"aa735193-59d1-4549-bc5b-7b4163a1869e","Type":"ContainerStarted","Data":"2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837"} Jan 28 15:58:28 crc kubenswrapper[4903]: I0128 15:58:28.410197 4903 generic.go:334] "Generic (PLEG): container finished" podID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerID="83bd0a3cf7016016ecfce810629556cc02fc1294966803855f312780b4a69822" exitCode=0 Jan 28 15:58:28 crc kubenswrapper[4903]: I0128 15:58:28.410292 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" event={"ID":"d4bdebca-1925-4fd0-a85f-49a9ebed9b06","Type":"ContainerDied","Data":"83bd0a3cf7016016ecfce810629556cc02fc1294966803855f312780b4a69822"} Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.433835 4903 generic.go:334] "Generic (PLEG): container finished" podID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerID="2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837" exitCode=0 Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.434033 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sc6x" event={"ID":"aa735193-59d1-4549-bc5b-7b4163a1869e","Type":"ContainerDied","Data":"2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837"} Jan 28 15:58:29 crc 
kubenswrapper[4903]: I0128 15:58:29.695695 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.796598 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7lxb\" (UniqueName: \"kubernetes.io/projected/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-kube-api-access-l7lxb\") pod \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.796972 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-bundle\") pod \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.797096 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-util\") pod \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\" (UID: \"d4bdebca-1925-4fd0-a85f-49a9ebed9b06\") " Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.797690 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-bundle" (OuterVolumeSpecName: "bundle") pod "d4bdebca-1925-4fd0-a85f-49a9ebed9b06" (UID: "d4bdebca-1925-4fd0-a85f-49a9ebed9b06"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.810737 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-kube-api-access-l7lxb" (OuterVolumeSpecName: "kube-api-access-l7lxb") pod "d4bdebca-1925-4fd0-a85f-49a9ebed9b06" (UID: "d4bdebca-1925-4fd0-a85f-49a9ebed9b06"). InnerVolumeSpecName "kube-api-access-l7lxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.873221 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-util" (OuterVolumeSpecName: "util") pod "d4bdebca-1925-4fd0-a85f-49a9ebed9b06" (UID: "d4bdebca-1925-4fd0-a85f-49a9ebed9b06"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.898898 4903 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.898939 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7lxb\" (UniqueName: \"kubernetes.io/projected/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-kube-api-access-l7lxb\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:29 crc kubenswrapper[4903]: I0128 15:58:29.898949 4903 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4bdebca-1925-4fd0-a85f-49a9ebed9b06-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:30 crc kubenswrapper[4903]: I0128 15:58:30.441777 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sc6x" event={"ID":"aa735193-59d1-4549-bc5b-7b4163a1869e","Type":"ContainerStarted","Data":"9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5"} Jan 28 15:58:30 crc kubenswrapper[4903]: I0128 15:58:30.446450 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" event={"ID":"d4bdebca-1925-4fd0-a85f-49a9ebed9b06","Type":"ContainerDied","Data":"fafdbb3ccb8dc8b4e3b40bd94ec9dadb6e87da8e929f53fe3f47d4de1f6c4f6e"} Jan 28 15:58:30 crc kubenswrapper[4903]: I0128 15:58:30.446482 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fafdbb3ccb8dc8b4e3b40bd94ec9dadb6e87da8e929f53fe3f47d4de1f6c4f6e" Jan 28 15:58:30 crc kubenswrapper[4903]: I0128 15:58:30.446557 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713nw89j" Jan 28 15:58:30 crc kubenswrapper[4903]: I0128 15:58:30.462516 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2sc6x" podStartSLOduration=2.958463736 podStartE2EDuration="5.462500013s" podCreationTimestamp="2026-01-28 15:58:25 +0000 UTC" firstStartedPulling="2026-01-28 15:58:27.387399599 +0000 UTC m=+779.663371110" lastFinishedPulling="2026-01-28 15:58:29.891435886 +0000 UTC m=+782.167407387" observedRunningTime="2026-01-28 15:58:30.456787897 +0000 UTC m=+782.732759428" watchObservedRunningTime="2026-01-28 15:58:30.462500013 +0000 UTC m=+782.738471524" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.925821 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r2kjt"] Jan 28 15:58:33 crc kubenswrapper[4903]: E0128 15:58:33.927637 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerName="pull" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.927719 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerName="pull" Jan 28 15:58:33 crc kubenswrapper[4903]: E0128 15:58:33.927801 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerName="util" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.927897 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerName="util" Jan 28 15:58:33 crc kubenswrapper[4903]: E0128 15:58:33.927972 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerName="extract" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.928041 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerName="extract" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.928208 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bdebca-1925-4fd0-a85f-49a9ebed9b06" containerName="extract" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.928781 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.930951 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.931295 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 15:58:33 crc kubenswrapper[4903]: I0128 15:58:33.931457 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-br628" Jan 28 15:58:34 crc kubenswrapper[4903]: I0128 15:58:34.013947 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r2kjt"] Jan 28 15:58:34 crc kubenswrapper[4903]: I0128 15:58:34.050445 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhc4q\" (UniqueName: \"kubernetes.io/projected/653a99c2-6e4a-49b6-b1c4-5fbe3460a77b-kube-api-access-zhc4q\") pod \"nmstate-operator-646758c888-r2kjt\" (UID: \"653a99c2-6e4a-49b6-b1c4-5fbe3460a77b\") " pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" Jan 28 15:58:34 crc kubenswrapper[4903]: I0128 15:58:34.151508 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc4q\" (UniqueName: \"kubernetes.io/projected/653a99c2-6e4a-49b6-b1c4-5fbe3460a77b-kube-api-access-zhc4q\") pod \"nmstate-operator-646758c888-r2kjt\" (UID: \"653a99c2-6e4a-49b6-b1c4-5fbe3460a77b\") " pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" Jan 28 15:58:34 crc kubenswrapper[4903]: I0128 15:58:34.172933 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhc4q\" (UniqueName: \"kubernetes.io/projected/653a99c2-6e4a-49b6-b1c4-5fbe3460a77b-kube-api-access-zhc4q\") pod \"nmstate-operator-646758c888-r2kjt\" (UID: \"653a99c2-6e4a-49b6-b1c4-5fbe3460a77b\") " pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" Jan 28 15:58:34 crc kubenswrapper[4903]: I0128 15:58:34.396000 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" Jan 28 15:58:34 crc kubenswrapper[4903]: I0128 15:58:34.852277 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r2kjt"] Jan 28 15:58:34 crc kubenswrapper[4903]: W0128 15:58:34.854677 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod653a99c2_6e4a_49b6_b1c4_5fbe3460a77b.slice/crio-825653f118d58530a8c085e2798cafd83c67476f87a217811f2fcb74fb99dc4d WatchSource:0}: Error finding container 825653f118d58530a8c085e2798cafd83c67476f87a217811f2fcb74fb99dc4d: Status 404 returned error can't find the container with id 825653f118d58530a8c085e2798cafd83c67476f87a217811f2fcb74fb99dc4d Jan 28 15:58:35 crc kubenswrapper[4903]: I0128 15:58:35.476444 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" event={"ID":"653a99c2-6e4a-49b6-b1c4-5fbe3460a77b","Type":"ContainerStarted","Data":"825653f118d58530a8c085e2798cafd83c67476f87a217811f2fcb74fb99dc4d"} Jan 28 15:58:36 crc kubenswrapper[4903]: I0128 15:58:36.333241 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:36 crc kubenswrapper[4903]: I0128 15:58:36.333772 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:36 crc kubenswrapper[4903]: I0128 15:58:36.368965 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:36 crc kubenswrapper[4903]: I0128 15:58:36.518291 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:38 crc kubenswrapper[4903]: I0128 15:58:38.491117 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" event={"ID":"653a99c2-6e4a-49b6-b1c4-5fbe3460a77b","Type":"ContainerStarted","Data":"1f2cae1e9f7442b9f694c32eff08caabfad88d59ecae1996f50543c9def9fc5b"} Jan 28 15:58:38 crc kubenswrapper[4903]: I0128 15:58:38.509788 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-r2kjt" podStartSLOduration=2.112394829 podStartE2EDuration="5.509768773s" podCreationTimestamp="2026-01-28 15:58:33 +0000 UTC" firstStartedPulling="2026-01-28 15:58:34.85736914 +0000 UTC m=+787.133340651" lastFinishedPulling="2026-01-28 15:58:38.254743074 +0000 UTC m=+790.530714595" observedRunningTime="2026-01-28 15:58:38.504213892 +0000 UTC m=+790.780185403" watchObservedRunningTime="2026-01-28 15:58:38.509768773 +0000 UTC m=+790.785740284" Jan 28 15:58:38 crc kubenswrapper[4903]: I0128 15:58:38.986503 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sc6x"] Jan 28 15:58:39 crc kubenswrapper[4903]: I0128 15:58:39.496291 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2sc6x" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="registry-server" containerID="cri-o://9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5" gracePeriod=2 Jan 28 15:58:39 crc kubenswrapper[4903]: I0128 15:58:39.868491 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.024249 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-utilities\") pod \"aa735193-59d1-4549-bc5b-7b4163a1869e\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.024305 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-catalog-content\") pod \"aa735193-59d1-4549-bc5b-7b4163a1869e\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.024371 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sfr6\" (UniqueName: \"kubernetes.io/projected/aa735193-59d1-4549-bc5b-7b4163a1869e-kube-api-access-2sfr6\") pod \"aa735193-59d1-4549-bc5b-7b4163a1869e\" (UID: \"aa735193-59d1-4549-bc5b-7b4163a1869e\") " Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.025499 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-utilities" (OuterVolumeSpecName: "utilities") pod "aa735193-59d1-4549-bc5b-7b4163a1869e" (UID: "aa735193-59d1-4549-bc5b-7b4163a1869e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.030916 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa735193-59d1-4549-bc5b-7b4163a1869e-kube-api-access-2sfr6" (OuterVolumeSpecName: "kube-api-access-2sfr6") pod "aa735193-59d1-4549-bc5b-7b4163a1869e" (UID: "aa735193-59d1-4549-bc5b-7b4163a1869e"). InnerVolumeSpecName "kube-api-access-2sfr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.125951 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sfr6\" (UniqueName: \"kubernetes.io/projected/aa735193-59d1-4549-bc5b-7b4163a1869e-kube-api-access-2sfr6\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.125996 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.188734 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa735193-59d1-4549-bc5b-7b4163a1869e" (UID: "aa735193-59d1-4549-bc5b-7b4163a1869e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.227271 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa735193-59d1-4549-bc5b-7b4163a1869e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.503038 4903 generic.go:334] "Generic (PLEG): container finished" podID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerID="9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5" exitCode=0 Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.503107 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sc6x" event={"ID":"aa735193-59d1-4549-bc5b-7b4163a1869e","Type":"ContainerDied","Data":"9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5"} Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.503138 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sc6x" event={"ID":"aa735193-59d1-4549-bc5b-7b4163a1869e","Type":"ContainerDied","Data":"7f24f3329bcffb81220170eac23ac9ff923c9f833cb562ff36f6d0183a0becd8"} Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.503179 4903 scope.go:117] "RemoveContainer" containerID="9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.503354 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sc6x" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.520302 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sc6x"] Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.523603 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2sc6x"] Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.525377 4903 scope.go:117] "RemoveContainer" containerID="2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.542433 4903 scope.go:117] "RemoveContainer" containerID="bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.562573 4903 scope.go:117] "RemoveContainer" containerID="9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5" Jan 28 15:58:40 crc kubenswrapper[4903]: E0128 15:58:40.562988 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5\": container with ID starting with 9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5 not found: ID does not exist" containerID="9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.563035 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5"} err="failed to get container status \"9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5\": rpc error: code = NotFound desc = could not find container \"9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5\": container with ID starting with 9c37768c2b3e11c7a803376a79457ebad73d46cb70bb6bd31658fa234eca38a5 not found: ID does not exist" Jan 28 15:58:40 crc 
kubenswrapper[4903]: I0128 15:58:40.563062 4903 scope.go:117] "RemoveContainer" containerID="2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837" Jan 28 15:58:40 crc kubenswrapper[4903]: E0128 15:58:40.563464 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837\": container with ID starting with 2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837 not found: ID does not exist" containerID="2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.563495 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837"} err="failed to get container status \"2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837\": rpc error: code = NotFound desc = could not find container \"2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837\": container with ID starting with 2d9d915b1845a1410ba8740c4f5f393e6e535ef5c166974ca28db73902825837 not found: ID does not exist" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.563519 4903 scope.go:117] "RemoveContainer" containerID="bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c" Jan 28 15:58:40 crc kubenswrapper[4903]: E0128 15:58:40.563997 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c\": container with ID starting with bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c not found: ID does not exist" containerID="bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c" Jan 28 15:58:40 crc kubenswrapper[4903]: I0128 15:58:40.564094 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c"} err="failed to get container status \"bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c\": rpc error: code = NotFound desc = could not find container \"bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c\": container with ID starting with bb96685acf7d2ccd12c0076fbf91b1be1068b119be67e7c4855b18d1cfdf757c not found: ID does not exist" Jan 28 15:58:42 crc kubenswrapper[4903]: I0128 15:58:42.422108 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" path="/var/lib/kubelet/pods/aa735193-59d1-4549-bc5b-7b4163a1869e/volumes" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.727410 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ppccm"] Jan 28 15:58:43 crc kubenswrapper[4903]: E0128 15:58:43.727905 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="extract-content" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.727917 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="extract-content" Jan 28 15:58:43 crc kubenswrapper[4903]: E0128 15:58:43.727932 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="extract-utilities" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.727937 4903 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="extract-utilities" Jan 28 15:58:43 crc kubenswrapper[4903]: E0128 15:58:43.727947 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="registry-server" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.727954 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="registry-server" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.728044 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa735193-59d1-4549-bc5b-7b4163a1869e" containerName="registry-server" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.728569 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.733716 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dn2p2" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.740025 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ppccm"] Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.749825 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk"] Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.750609 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.752806 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.775631 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk"] Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.782799 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vm9c4"] Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.783592 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.877187 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxwzx\" (UniqueName: \"kubernetes.io/projected/428306e6-a9f6-4687-b563-d9706b03afe5-kube-api-access-zxwzx\") pod \"nmstate-webhook-8474b5b9d8-x4cbk\" (UID: \"428306e6-a9f6-4687-b563-d9706b03afe5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.877233 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khvln\" (UniqueName: \"kubernetes.io/projected/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-kube-api-access-khvln\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.877255 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhzb\" (UniqueName: \"kubernetes.io/projected/63a6d760-5906-4fb7-8625-225855777120-kube-api-access-6hhzb\") pod \"nmstate-metrics-54757c584b-ppccm\" (UID: \"63a6d760-5906-4fb7-8625-225855777120\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.877334 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-nmstate-lock\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.877376 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-ovs-socket\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.877416 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/428306e6-a9f6-4687-b563-d9706b03afe5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-x4cbk\" (UID: \"428306e6-a9f6-4687-b563-d9706b03afe5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.877437 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-dbus-socket\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.905659 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k"] Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.906289 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.916323 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.916970 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.917655 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-74qt7" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.927445 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k"] Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978601 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxwzx\" (UniqueName: \"kubernetes.io/projected/428306e6-a9f6-4687-b563-d9706b03afe5-kube-api-access-zxwzx\") pod \"nmstate-webhook-8474b5b9d8-x4cbk\" (UID: \"428306e6-a9f6-4687-b563-d9706b03afe5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978649 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khvln\" (UniqueName: \"kubernetes.io/projected/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-kube-api-access-khvln\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978667 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hhzb\" (UniqueName: \"kubernetes.io/projected/63a6d760-5906-4fb7-8625-225855777120-kube-api-access-6hhzb\") pod \"nmstate-metrics-54757c584b-ppccm\" (UID: \"63a6d760-5906-4fb7-8625-225855777120\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978706 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-nmstate-lock\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978725 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-ovs-socket\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978754 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/428306e6-a9f6-4687-b563-d9706b03afe5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-x4cbk\" (UID: \"428306e6-a9f6-4687-b563-d9706b03afe5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978768 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-dbus-socket\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 
28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978802 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-ovs-socket\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.978799 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-nmstate-lock\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: E0128 15:58:43.978886 4903 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 15:58:43 crc kubenswrapper[4903]: E0128 15:58:43.978946 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/428306e6-a9f6-4687-b563-d9706b03afe5-tls-key-pair podName:428306e6-a9f6-4687-b563-d9706b03afe5 nodeName:}" failed. No retries permitted until 2026-01-28 15:58:44.478927863 +0000 UTC m=+796.754899374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/428306e6-a9f6-4687-b563-d9706b03afe5-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-x4cbk" (UID: "428306e6-a9f6-4687-b563-d9706b03afe5") : secret "openshift-nmstate-webhook" not found Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.979053 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-dbus-socket\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.997167 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khvln\" (UniqueName: \"kubernetes.io/projected/feb33a16-d2dd-4ce9-ac94-3008e7ef694a-kube-api-access-khvln\") pod \"nmstate-handler-vm9c4\" (UID: \"feb33a16-d2dd-4ce9-ac94-3008e7ef694a\") " pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:43 crc kubenswrapper[4903]: I0128 15:58:43.997328 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxwzx\" (UniqueName: \"kubernetes.io/projected/428306e6-a9f6-4687-b563-d9706b03afe5-kube-api-access-zxwzx\") pod \"nmstate-webhook-8474b5b9d8-x4cbk\" (UID: \"428306e6-a9f6-4687-b563-d9706b03afe5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.001768 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hhzb\" (UniqueName: \"kubernetes.io/projected/63a6d760-5906-4fb7-8625-225855777120-kube-api-access-6hhzb\") pod \"nmstate-metrics-54757c584b-ppccm\" (UID: \"63a6d760-5906-4fb7-8625-225855777120\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.045270 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.080498 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8281268b-6e4d-4162-9077-5ce83548e1fd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.080936 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8281268b-6e4d-4162-9077-5ce83548e1fd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.081036 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7cnn\" (UniqueName: \"kubernetes.io/projected/8281268b-6e4d-4162-9077-5ce83548e1fd-kube-api-access-d7cnn\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.096595 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-77478c78c9-qdzg9"] Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.097422 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.105781 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.109167 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77478c78c9-qdzg9"] Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.182374 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8281268b-6e4d-4162-9077-5ce83548e1fd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.182424 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8281268b-6e4d-4162-9077-5ce83548e1fd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.182508 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7cnn\" (UniqueName: \"kubernetes.io/projected/8281268b-6e4d-4162-9077-5ce83548e1fd-kube-api-access-d7cnn\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.183989 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8281268b-6e4d-4162-9077-5ce83548e1fd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.188839 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8281268b-6e4d-4162-9077-5ce83548e1fd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.199636 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7cnn\" (UniqueName: \"kubernetes.io/projected/8281268b-6e4d-4162-9077-5ce83548e1fd-kube-api-access-d7cnn\") pod \"nmstate-console-plugin-7754f76f8b-h6d6k\" (UID: \"8281268b-6e4d-4162-9077-5ce83548e1fd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.229957 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.276927 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ppccm"] Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.284061 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnr7\" (UniqueName: \"kubernetes.io/projected/acf6f7bb-c965-431c-85fc-b253cfe86096-kube-api-access-sfnr7\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.284126 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acf6f7bb-c965-431c-85fc-b253cfe86096-console-serving-cert\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.284145 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-service-ca\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.284314 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-trusted-ca-bundle\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.284353 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acf6f7bb-c965-431c-85fc-b253cfe86096-console-oauth-config\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.284403 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-console-config\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.284456 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-oauth-serving-cert\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.385708 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acf6f7bb-c965-431c-85fc-b253cfe86096-console-serving-cert\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " 
pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.386075 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-service-ca\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.386129 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-trusted-ca-bundle\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.386148 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acf6f7bb-c965-431c-85fc-b253cfe86096-console-oauth-config\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.386176 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-console-config\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.386602 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-oauth-serving-cert\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.386637 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnr7\" (UniqueName: \"kubernetes.io/projected/acf6f7bb-c965-431c-85fc-b253cfe86096-kube-api-access-sfnr7\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.386950 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-service-ca\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.387195 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-console-config\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.387355 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-trusted-ca-bundle\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" 
Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.387561 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/acf6f7bb-c965-431c-85fc-b253cfe86096-oauth-serving-cert\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.390815 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/acf6f7bb-c965-431c-85fc-b253cfe86096-console-serving-cert\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.391709 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/acf6f7bb-c965-431c-85fc-b253cfe86096-console-oauth-config\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.404523 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnr7\" (UniqueName: \"kubernetes.io/projected/acf6f7bb-c965-431c-85fc-b253cfe86096-kube-api-access-sfnr7\") pod \"console-77478c78c9-qdzg9\" (UID: \"acf6f7bb-c965-431c-85fc-b253cfe86096\") " pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.412461 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k"] Jan 28 15:58:44 crc kubenswrapper[4903]: W0128 15:58:44.418565 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8281268b_6e4d_4162_9077_5ce83548e1fd.slice/crio-5f0c1724c6af0da32ab5d783955d3257dcd767972784f78456a84083571eac7f WatchSource:0}: Error finding container 5f0c1724c6af0da32ab5d783955d3257dcd767972784f78456a84083571eac7f: Status 404 returned error can't find the container with id 5f0c1724c6af0da32ab5d783955d3257dcd767972784f78456a84083571eac7f Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.460587 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.487626 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/428306e6-a9f6-4687-b563-d9706b03afe5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-x4cbk\" (UID: \"428306e6-a9f6-4687-b563-d9706b03afe5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.494624 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/428306e6-a9f6-4687-b563-d9706b03afe5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-x4cbk\" (UID: \"428306e6-a9f6-4687-b563-d9706b03afe5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.532570 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" event={"ID":"63a6d760-5906-4fb7-8625-225855777120","Type":"ContainerStarted","Data":"ec3a85e268b203fc5a7474fa8e7d3449bd22d3d6fa0b9839125ffe8990a8bd9f"} Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.533649 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" event={"ID":"8281268b-6e4d-4162-9077-5ce83548e1fd","Type":"ContainerStarted","Data":"5f0c1724c6af0da32ab5d783955d3257dcd767972784f78456a84083571eac7f"} Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.534891 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vm9c4" event={"ID":"feb33a16-d2dd-4ce9-ac94-3008e7ef694a","Type":"ContainerStarted","Data":"d5edb52b7bc1ceec06f0fecfeb657eb4bf267b4ea63d85a293b7f56083cda570"} Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.644816 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77478c78c9-qdzg9"] Jan 28 15:58:44 crc kubenswrapper[4903]: W0128 15:58:44.648896 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacf6f7bb_c965_431c_85fc_b253cfe86096.slice/crio-55c245babf7085052d33fd974f9bb626691e49295168785b15bd81c7c97aaa50 WatchSource:0}: Error finding container 55c245babf7085052d33fd974f9bb626691e49295168785b15bd81c7c97aaa50: Status 404 returned error can't find the container with id 55c245babf7085052d33fd974f9bb626691e49295168785b15bd81c7c97aaa50 Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.677242 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:44 crc kubenswrapper[4903]: I0128 15:58:44.852758 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk"] Jan 28 15:58:44 crc kubenswrapper[4903]: W0128 15:58:44.863323 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428306e6_a9f6_4687_b563_d9706b03afe5.slice/crio-104d1eec607e77ecd0b28c5fa5862275be6fe1702736229b64cf7e0ad313adf0 WatchSource:0}: Error finding container 104d1eec607e77ecd0b28c5fa5862275be6fe1702736229b64cf7e0ad313adf0: Status 404 returned error can't find the container with id 104d1eec607e77ecd0b28c5fa5862275be6fe1702736229b64cf7e0ad313adf0 Jan 28 15:58:45 crc kubenswrapper[4903]: I0128 15:58:45.544398 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77478c78c9-qdzg9" event={"ID":"acf6f7bb-c965-431c-85fc-b253cfe86096","Type":"ContainerStarted","Data":"bd17a18a32858a1a9de585d86b781ec68b1b43a1f659012ab3b87c118c2ea08a"} Jan 28 15:58:45 crc kubenswrapper[4903]: I0128 15:58:45.544727 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77478c78c9-qdzg9" event={"ID":"acf6f7bb-c965-431c-85fc-b253cfe86096","Type":"ContainerStarted","Data":"55c245babf7085052d33fd974f9bb626691e49295168785b15bd81c7c97aaa50"} Jan 28 15:58:45 crc kubenswrapper[4903]: I0128 15:58:45.545542 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" event={"ID":"428306e6-a9f6-4687-b563-d9706b03afe5","Type":"ContainerStarted","Data":"104d1eec607e77ecd0b28c5fa5862275be6fe1702736229b64cf7e0ad313adf0"} Jan 28 15:58:45 crc kubenswrapper[4903]: I0128 15:58:45.562820 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-77478c78c9-qdzg9" podStartSLOduration=1.5628022609999999 podStartE2EDuration="1.562802261s" podCreationTimestamp="2026-01-28 15:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:58:45.561326591 +0000 UTC m=+797.837298122" watchObservedRunningTime="2026-01-28 15:58:45.562802261 +0000 UTC m=+797.838773772" Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.566201 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" event={"ID":"63a6d760-5906-4fb7-8625-225855777120","Type":"ContainerStarted","Data":"a33f517d1db8fbb41e8dfbeb77043331cf35182cb2db58930b9609f5d180936c"} Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.568445 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" event={"ID":"8281268b-6e4d-4162-9077-5ce83548e1fd","Type":"ContainerStarted","Data":"2dac463c72f4f43e1d7066b3ca86804bc8794de39ee92686f232f9e11e702e0c"} Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.570049 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" event={"ID":"428306e6-a9f6-4687-b563-d9706b03afe5","Type":"ContainerStarted","Data":"c70790247c9aaeb5d8a80c449ad865a8e951957178015d833ed6d50e75a56fc9"} Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.570296 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:58:47 crc 
kubenswrapper[4903]: I0128 15:58:47.571890 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vm9c4" event={"ID":"feb33a16-d2dd-4ce9-ac94-3008e7ef694a","Type":"ContainerStarted","Data":"421a01f94c19f170a5890826cffdaf5f24f6db828bf3c8d327416ea981f32561"} Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.572118 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.589407 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-h6d6k" podStartSLOduration=2.05601577 podStartE2EDuration="4.58938772s" podCreationTimestamp="2026-01-28 15:58:43 +0000 UTC" firstStartedPulling="2026-01-28 15:58:44.420596334 +0000 UTC m=+796.696567845" lastFinishedPulling="2026-01-28 15:58:46.953968244 +0000 UTC m=+799.229939795" observedRunningTime="2026-01-28 15:58:47.5872104 +0000 UTC m=+799.863181911" watchObservedRunningTime="2026-01-28 15:58:47.58938772 +0000 UTC m=+799.865359231" Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.606085 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" podStartSLOduration=2.492402358 podStartE2EDuration="4.606068175s" podCreationTimestamp="2026-01-28 15:58:43 +0000 UTC" firstStartedPulling="2026-01-28 15:58:44.866551483 +0000 UTC m=+797.142522994" lastFinishedPulling="2026-01-28 15:58:46.9802173 +0000 UTC m=+799.256188811" observedRunningTime="2026-01-28 15:58:47.605040828 +0000 UTC m=+799.881012339" watchObservedRunningTime="2026-01-28 15:58:47.606068175 +0000 UTC m=+799.882039686" Jan 28 15:58:47 crc kubenswrapper[4903]: I0128 15:58:47.628921 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vm9c4" podStartSLOduration=1.807112609 podStartE2EDuration="4.62890208s" podCreationTimestamp="2026-01-28 15:58:43 +0000 UTC" firstStartedPulling="2026-01-28 15:58:44.151031937 +0000 UTC m=+796.427003448" lastFinishedPulling="2026-01-28 15:58:46.972821408 +0000 UTC m=+799.248792919" observedRunningTime="2026-01-28 15:58:47.626407062 +0000 UTC m=+799.902378603" watchObservedRunningTime="2026-01-28 15:58:47.62890208 +0000 UTC m=+799.904873611" Jan 28 15:58:50 crc kubenswrapper[4903]: I0128 15:58:50.593954 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" event={"ID":"63a6d760-5906-4fb7-8625-225855777120","Type":"ContainerStarted","Data":"b6c5e4090188a35d33b6d235b34a3cf196e75e6914e5a55970a3bc622b853ace"} Jan 28 15:58:50 crc kubenswrapper[4903]: I0128 15:58:50.617388 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ppccm" podStartSLOduration=1.6533266549999999 podStartE2EDuration="7.617367447s" podCreationTimestamp="2026-01-28 15:58:43 +0000 UTC" firstStartedPulling="2026-01-28 15:58:44.288880044 +0000 UTC m=+796.564851555" lastFinishedPulling="2026-01-28 15:58:50.252920836 +0000 UTC m=+802.528892347" observedRunningTime="2026-01-28 15:58:50.609360578 +0000 UTC m=+802.885332089" watchObservedRunningTime="2026-01-28 15:58:50.617367447 +0000 UTC m=+802.893338958" Jan 28 15:58:54 crc kubenswrapper[4903]: I0128 15:58:54.132472 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vm9c4" Jan 28 15:58:54 crc kubenswrapper[4903]: 
I0128 15:58:54.460814 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:54 crc kubenswrapper[4903]: I0128 15:58:54.461446 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:54 crc kubenswrapper[4903]: I0128 15:58:54.468871 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:54 crc kubenswrapper[4903]: I0128 15:58:54.632053 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-77478c78c9-qdzg9" Jan 28 15:58:54 crc kubenswrapper[4903]: I0128 15:58:54.702401 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-522t5"] Jan 28 15:59:04 crc kubenswrapper[4903]: I0128 15:59:04.687398 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-x4cbk" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.690136 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn"] Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.691587 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.694437 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.713081 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn"] Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.755160 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.755264 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.755308 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlc55\" (UniqueName: \"kubernetes.io/projected/11e2e603-b3aa-483c-927b-3ea1d34891a0-kube-api-access-dlc55\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.856807 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.856862 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.856886 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlc55\" (UniqueName: \"kubernetes.io/projected/11e2e603-b3aa-483c-927b-3ea1d34891a0-kube-api-access-dlc55\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.857272 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.857326 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:18 crc kubenswrapper[4903]: I0128 15:59:18.873627 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlc55\" (UniqueName: \"kubernetes.io/projected/11e2e603-b3aa-483c-927b-3ea1d34891a0-kube-api-access-dlc55\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:19 crc kubenswrapper[4903]: I0128 15:59:19.015049 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:19 crc kubenswrapper[4903]: I0128 15:59:19.482396 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn"] Jan 28 15:59:19 crc kubenswrapper[4903]: I0128 15:59:19.766133 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-522t5" podUID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" containerName="console" containerID="cri-o://b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442" gracePeriod=15 Jan 28 15:59:19 crc kubenswrapper[4903]: I0128 15:59:19.788421 4903 generic.go:334] "Generic (PLEG): container finished" podID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerID="7743fc94d61787c52bea3a2b68d0968721da2515bcb498c34b852e627266aaaf" exitCode=0 Jan 28 15:59:19 crc kubenswrapper[4903]: I0128 15:59:19.788463 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" event={"ID":"11e2e603-b3aa-483c-927b-3ea1d34891a0","Type":"ContainerDied","Data":"7743fc94d61787c52bea3a2b68d0968721da2515bcb498c34b852e627266aaaf"} Jan 28 15:59:19 crc kubenswrapper[4903]: I0128 15:59:19.788492 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" event={"ID":"11e2e603-b3aa-483c-927b-3ea1d34891a0","Type":"ContainerStarted","Data":"b5bd4d1e9edb6a461d12d9a977cd8a4c71e39d4f692f858387a8d98b0d6c7562"} Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.118872 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-522t5_cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1/console/0.log" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.119087 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.271788 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-service-ca\") pod \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.271890 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-oauth-serving-cert\") pod \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.271968 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-serving-cert\") pod \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.272007 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-trusted-ca-bundle\") pod \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.272080 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-oauth-config\") pod \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.272128 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-kube-api-access-6kkgj\") pod \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.272154 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-config\") pod \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\" (UID: \"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1\") " Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.272964 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" (UID: "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.273009 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-config" (OuterVolumeSpecName: "console-config") pod "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" (UID: "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.273031 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" (UID: "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.273071 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-service-ca" (OuterVolumeSpecName: "service-ca") pod "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" (UID: "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.278225 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" (UID: "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.279036 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" (UID: "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.284277 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-kube-api-access-6kkgj" (OuterVolumeSpecName: "kube-api-access-6kkgj") pod "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" (UID: "cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1"). InnerVolumeSpecName "kube-api-access-6kkgj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.373370 4903 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.373425 4903 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.373448 4903 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.373468 4903 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.373489 4903 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.373508 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kkgj\" (UniqueName: \"kubernetes.io/projected/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-kube-api-access-6kkgj\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.373533 4903 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.799688 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-522t5_cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1/console/0.log" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.799770 4903 generic.go:334] "Generic (PLEG): container finished" podID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" containerID="b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442" exitCode=2 Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.799821 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-522t5" event={"ID":"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1","Type":"ContainerDied","Data":"b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442"} Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.799861 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-522t5" event={"ID":"cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1","Type":"ContainerDied","Data":"ae869a673bd64e3fa482272fc8392d191a8386a216c291adeea238a183176dc8"} Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.799888 4903 scope.go:117] "RemoveContainer" containerID="b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.800088 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-522t5" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.826482 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-522t5"] Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.830210 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-522t5"] Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.832279 4903 scope.go:117] "RemoveContainer" containerID="b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442" Jan 28 15:59:20 crc kubenswrapper[4903]: E0128 15:59:20.833032 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442\": container with ID starting with b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442 not found: ID does not exist" containerID="b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442" Jan 28 15:59:20 crc kubenswrapper[4903]: I0128 15:59:20.833072 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442"} err="failed to get container status \"b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442\": rpc error: code = NotFound desc = could not find container \"b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442\": container with ID starting with b43e9acb3e898fcff6d493df2332cef7e053b71e781ddaa37b0327b3ed722442 not found: ID does not exist" Jan 28 15:59:21 crc kubenswrapper[4903]: I0128 15:59:21.806817 4903 generic.go:334] "Generic (PLEG): container finished" podID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerID="5e595c8968305a7bb45b5e34b32a738460826776e3e364f5150a483900406374" exitCode=0 Jan 28 15:59:21 crc kubenswrapper[4903]: I0128 15:59:21.806908 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" event={"ID":"11e2e603-b3aa-483c-927b-3ea1d34891a0","Type":"ContainerDied","Data":"5e595c8968305a7bb45b5e34b32a738460826776e3e364f5150a483900406374"} Jan 28 15:59:22 crc kubenswrapper[4903]: I0128 15:59:22.423493 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" path="/var/lib/kubelet/pods/cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1/volumes" Jan 28 15:59:22 crc kubenswrapper[4903]: I0128 15:59:22.819628 4903 generic.go:334] "Generic (PLEG): container finished" podID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerID="100d08dc5922ceeef98d3845c8dc06a9e9c87e62052de3449359755c07518f3a" exitCode=0 Jan 28 15:59:22 crc kubenswrapper[4903]: I0128 15:59:22.819717 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" event={"ID":"11e2e603-b3aa-483c-927b-3ea1d34891a0","Type":"ContainerDied","Data":"100d08dc5922ceeef98d3845c8dc06a9e9c87e62052de3449359755c07518f3a"} Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.068308 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.234587 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-util\") pod \"11e2e603-b3aa-483c-927b-3ea1d34891a0\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.234713 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlc55\" (UniqueName: \"kubernetes.io/projected/11e2e603-b3aa-483c-927b-3ea1d34891a0-kube-api-access-dlc55\") pod \"11e2e603-b3aa-483c-927b-3ea1d34891a0\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.234795 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-bundle\") pod \"11e2e603-b3aa-483c-927b-3ea1d34891a0\" (UID: \"11e2e603-b3aa-483c-927b-3ea1d34891a0\") " Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.237359 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-bundle" (OuterVolumeSpecName: "bundle") pod "11e2e603-b3aa-483c-927b-3ea1d34891a0" (UID: "11e2e603-b3aa-483c-927b-3ea1d34891a0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.242195 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e2e603-b3aa-483c-927b-3ea1d34891a0-kube-api-access-dlc55" (OuterVolumeSpecName: "kube-api-access-dlc55") pod "11e2e603-b3aa-483c-927b-3ea1d34891a0" (UID: "11e2e603-b3aa-483c-927b-3ea1d34891a0"). InnerVolumeSpecName "kube-api-access-dlc55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.267789 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-util" (OuterVolumeSpecName: "util") pod "11e2e603-b3aa-483c-927b-3ea1d34891a0" (UID: "11e2e603-b3aa-483c-927b-3ea1d34891a0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.336402 4903 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.336436 4903 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e2e603-b3aa-483c-927b-3ea1d34891a0-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.336446 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlc55\" (UniqueName: \"kubernetes.io/projected/11e2e603-b3aa-483c-927b-3ea1d34891a0-kube-api-access-dlc55\") on node \"crc\" DevicePath \"\"" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.831989 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" event={"ID":"11e2e603-b3aa-483c-927b-3ea1d34891a0","Type":"ContainerDied","Data":"b5bd4d1e9edb6a461d12d9a977cd8a4c71e39d4f692f858387a8d98b0d6c7562"} Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.832402 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5bd4d1e9edb6a461d12d9a977cd8a4c71e39d4f692f858387a8d98b0d6c7562" Jan 28 15:59:24 crc kubenswrapper[4903]: I0128 15:59:24.832138 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7q5xn" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.566982 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2"] Jan 28 15:59:33 crc kubenswrapper[4903]: E0128 15:59:33.567796 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerName="pull" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.567815 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerName="pull" Jan 28 15:59:33 crc kubenswrapper[4903]: E0128 15:59:33.567829 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerName="extract" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.567836 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerName="extract" Jan 28 15:59:33 crc kubenswrapper[4903]: E0128 15:59:33.567846 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" containerName="console" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.567854 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" containerName="console" Jan 28 15:59:33 crc kubenswrapper[4903]: E0128 15:59:33.567872 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerName="util" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.567879 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerName="util" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.567995 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd66bb6e-6bd5-41be-8492-f3e6ba7ca5a1" containerName="console" Jan 
28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.568010 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e2e603-b3aa-483c-927b-3ea1d34891a0" containerName="extract" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.568484 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.573099 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.573172 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.573191 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.574084 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nqh9p" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.581155 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.586425 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2"] Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.616328 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfzww\" (UniqueName: \"kubernetes.io/projected/8ddbac26-b0eb-4836-8d86-0a75c96ae111-kube-api-access-qfzww\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.616377 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ddbac26-b0eb-4836-8d86-0a75c96ae111-webhook-cert\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.616433 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ddbac26-b0eb-4836-8d86-0a75c96ae111-apiservice-cert\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.717460 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ddbac26-b0eb-4836-8d86-0a75c96ae111-apiservice-cert\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.717939 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfzww\" (UniqueName: 
\"kubernetes.io/projected/8ddbac26-b0eb-4836-8d86-0a75c96ae111-kube-api-access-qfzww\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.717997 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ddbac26-b0eb-4836-8d86-0a75c96ae111-webhook-cert\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.726298 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ddbac26-b0eb-4836-8d86-0a75c96ae111-apiservice-cert\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.730147 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ddbac26-b0eb-4836-8d86-0a75c96ae111-webhook-cert\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.745240 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfzww\" (UniqueName: \"kubernetes.io/projected/8ddbac26-b0eb-4836-8d86-0a75c96ae111-kube-api-access-qfzww\") pod \"metallb-operator-controller-manager-8576b765b5-pxrj2\" (UID: \"8ddbac26-b0eb-4836-8d86-0a75c96ae111\") " pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.877862 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt"] Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.878721 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.881243 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.884915 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.885416 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-m469j" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.888967 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.921179 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a8892a-58bd-4fac-beed-f999de296d3f-webhook-cert\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.921262 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqrvg\" (UniqueName: \"kubernetes.io/projected/33a8892a-58bd-4fac-beed-f999de296d3f-kube-api-access-kqrvg\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.921315 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a8892a-58bd-4fac-beed-f999de296d3f-apiservice-cert\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:33 crc kubenswrapper[4903]: I0128 15:59:33.941701 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt"] Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.022429 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a8892a-58bd-4fac-beed-f999de296d3f-webhook-cert\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.022514 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqrvg\" (UniqueName: \"kubernetes.io/projected/33a8892a-58bd-4fac-beed-f999de296d3f-kube-api-access-kqrvg\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.022598 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a8892a-58bd-4fac-beed-f999de296d3f-apiservice-cert\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.028282 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/33a8892a-58bd-4fac-beed-f999de296d3f-apiservice-cert\") 
pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.040679 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/33a8892a-58bd-4fac-beed-f999de296d3f-webhook-cert\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.058763 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqrvg\" (UniqueName: \"kubernetes.io/projected/33a8892a-58bd-4fac-beed-f999de296d3f-kube-api-access-kqrvg\") pod \"metallb-operator-webhook-server-575cd674c6-66vqt\" (UID: \"33a8892a-58bd-4fac-beed-f999de296d3f\") " pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.196745 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.259256 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2"] Jan 28 15:59:34 crc kubenswrapper[4903]: W0128 15:59:34.272851 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ddbac26_b0eb_4836_8d86_0a75c96ae111.slice/crio-ffdc2572647d30de60850b293633e4498534053cb7065c23fdd6139a02493f2d WatchSource:0}: Error finding container ffdc2572647d30de60850b293633e4498534053cb7065c23fdd6139a02493f2d: Status 404 returned error can't find the container with id ffdc2572647d30de60850b293633e4498534053cb7065c23fdd6139a02493f2d Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.436232 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt"] Jan 28 15:59:34 crc kubenswrapper[4903]: W0128 15:59:34.443917 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a8892a_58bd_4fac_beed_f999de296d3f.slice/crio-beda270150d2c4125c883fcd02b657e812d131b031822e2628d49f69bba5ca11 WatchSource:0}: Error finding container beda270150d2c4125c883fcd02b657e812d131b031822e2628d49f69bba5ca11: Status 404 returned error can't find the container with id beda270150d2c4125c883fcd02b657e812d131b031822e2628d49f69bba5ca11 Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.892890 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" event={"ID":"33a8892a-58bd-4fac-beed-f999de296d3f","Type":"ContainerStarted","Data":"beda270150d2c4125c883fcd02b657e812d131b031822e2628d49f69bba5ca11"} Jan 28 15:59:34 crc kubenswrapper[4903]: I0128 15:59:34.894313 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" event={"ID":"8ddbac26-b0eb-4836-8d86-0a75c96ae111","Type":"ContainerStarted","Data":"ffdc2572647d30de60850b293633e4498534053cb7065c23fdd6139a02493f2d"} Jan 28 15:59:38 crc kubenswrapper[4903]: I0128 15:59:38.921587 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" event={"ID":"8ddbac26-b0eb-4836-8d86-0a75c96ae111","Type":"ContainerStarted","Data":"d34127d57318c29333e9065eabbb528b4151ad3f6a4b3bd8e05c775bcce23c64"} Jan 28 15:59:38 crc kubenswrapper[4903]: I0128 15:59:38.922237 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 15:59:38 crc kubenswrapper[4903]: I0128 15:59:38.924206 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" event={"ID":"33a8892a-58bd-4fac-beed-f999de296d3f","Type":"ContainerStarted","Data":"2101583f3af32c883a7861ab8991fb26644272a315551df4d3fabbd1f4d88d8b"} Jan 28 15:59:38 crc kubenswrapper[4903]: I0128 15:59:38.924791 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 15:59:38 crc kubenswrapper[4903]: I0128 15:59:38.944566 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" podStartSLOduration=1.714382914 podStartE2EDuration="5.944528287s" podCreationTimestamp="2026-01-28 15:59:33 +0000 UTC" firstStartedPulling="2026-01-28 15:59:34.275757665 +0000 UTC m=+846.551729186" lastFinishedPulling="2026-01-28 15:59:38.505903048 +0000 UTC m=+850.781874559" observedRunningTime="2026-01-28 15:59:38.939466458 +0000 UTC m=+851.215437969" watchObservedRunningTime="2026-01-28 15:59:38.944528287 +0000 UTC m=+851.220515988" Jan 28 15:59:38 crc kubenswrapper[4903]: I0128 15:59:38.961090 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" podStartSLOduration=1.883563046 podStartE2EDuration="5.961069219s" podCreationTimestamp="2026-01-28 15:59:33 +0000 UTC" firstStartedPulling="2026-01-28 15:59:34.446585711 +0000 UTC m=+846.722557222" lastFinishedPulling="2026-01-28 15:59:38.524091884 +0000 UTC m=+850.800063395" observedRunningTime="2026-01-28 15:59:38.957958204 +0000 UTC m=+851.233929785" watchObservedRunningTime="2026-01-28 15:59:38.961069219 +0000 UTC m=+851.237040730" Jan 28 15:59:54 crc kubenswrapper[4903]: I0128 15:59:54.201649 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-575cd674c6-66vqt" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.151355 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258"] Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.155241 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.157612 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.158090 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.165158 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258"] Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.181888 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9k7\" (UniqueName: \"kubernetes.io/projected/5866143c-b9f1-4789-b270-00769269e4a1-kube-api-access-4g9k7\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.181972 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5866143c-b9f1-4789-b270-00769269e4a1-config-volume\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.182027 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5866143c-b9f1-4789-b270-00769269e4a1-secret-volume\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.283341 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5866143c-b9f1-4789-b270-00769269e4a1-secret-volume\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.283454 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9k7\" (UniqueName: \"kubernetes.io/projected/5866143c-b9f1-4789-b270-00769269e4a1-kube-api-access-4g9k7\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.283479 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5866143c-b9f1-4789-b270-00769269e4a1-config-volume\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.284419 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5866143c-b9f1-4789-b270-00769269e4a1-config-volume\") pod 
\"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.290073 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5866143c-b9f1-4789-b270-00769269e4a1-secret-volume\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.302236 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9k7\" (UniqueName: \"kubernetes.io/projected/5866143c-b9f1-4789-b270-00769269e4a1-kube-api-access-4g9k7\") pod \"collect-profiles-29493600-fk258\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.474483 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:00 crc kubenswrapper[4903]: I0128 16:00:00.864751 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258"] Jan 28 16:00:00 crc kubenswrapper[4903]: W0128 16:00:00.876363 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5866143c_b9f1_4789_b270_00769269e4a1.slice/crio-6493085aa82e7d852319be7a76238b385523e942df348080e61a42ac6f0527f5 WatchSource:0}: Error finding container 6493085aa82e7d852319be7a76238b385523e942df348080e61a42ac6f0527f5: Status 404 returned error can't find the container with id 6493085aa82e7d852319be7a76238b385523e942df348080e61a42ac6f0527f5 Jan 28 16:00:01 crc kubenswrapper[4903]: I0128 16:00:01.031772 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" event={"ID":"5866143c-b9f1-4789-b270-00769269e4a1","Type":"ContainerStarted","Data":"6493085aa82e7d852319be7a76238b385523e942df348080e61a42ac6f0527f5"} Jan 28 16:00:02 crc kubenswrapper[4903]: I0128 16:00:02.040260 4903 generic.go:334] "Generic (PLEG): container finished" podID="5866143c-b9f1-4789-b270-00769269e4a1" containerID="ea52012ea53daa00e69457f556a103ef3f4f23481d34c6acdc4f501970dd5ba6" exitCode=0 Jan 28 16:00:02 crc kubenswrapper[4903]: I0128 16:00:02.040380 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" event={"ID":"5866143c-b9f1-4789-b270-00769269e4a1","Type":"ContainerDied","Data":"ea52012ea53daa00e69457f556a103ef3f4f23481d34c6acdc4f501970dd5ba6"} Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.265473 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.325773 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9k7\" (UniqueName: \"kubernetes.io/projected/5866143c-b9f1-4789-b270-00769269e4a1-kube-api-access-4g9k7\") pod \"5866143c-b9f1-4789-b270-00769269e4a1\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.325849 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5866143c-b9f1-4789-b270-00769269e4a1-config-volume\") pod \"5866143c-b9f1-4789-b270-00769269e4a1\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.325918 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5866143c-b9f1-4789-b270-00769269e4a1-secret-volume\") pod \"5866143c-b9f1-4789-b270-00769269e4a1\" (UID: \"5866143c-b9f1-4789-b270-00769269e4a1\") " Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.326724 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5866143c-b9f1-4789-b270-00769269e4a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "5866143c-b9f1-4789-b270-00769269e4a1" (UID: "5866143c-b9f1-4789-b270-00769269e4a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.331425 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5866143c-b9f1-4789-b270-00769269e4a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5866143c-b9f1-4789-b270-00769269e4a1" (UID: "5866143c-b9f1-4789-b270-00769269e4a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.331735 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5866143c-b9f1-4789-b270-00769269e4a1-kube-api-access-4g9k7" (OuterVolumeSpecName: "kube-api-access-4g9k7") pod "5866143c-b9f1-4789-b270-00769269e4a1" (UID: "5866143c-b9f1-4789-b270-00769269e4a1"). InnerVolumeSpecName "kube-api-access-4g9k7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.426939 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g9k7\" (UniqueName: \"kubernetes.io/projected/5866143c-b9f1-4789-b270-00769269e4a1-kube-api-access-4g9k7\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.426967 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5866143c-b9f1-4789-b270-00769269e4a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:03 crc kubenswrapper[4903]: I0128 16:00:03.426976 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5866143c-b9f1-4789-b270-00769269e4a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:04 crc kubenswrapper[4903]: I0128 16:00:04.054617 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" event={"ID":"5866143c-b9f1-4789-b270-00769269e4a1","Type":"ContainerDied","Data":"6493085aa82e7d852319be7a76238b385523e942df348080e61a42ac6f0527f5"} Jan 28 16:00:04 crc kubenswrapper[4903]: I0128 16:00:04.054966 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6493085aa82e7d852319be7a76238b385523e942df348080e61a42ac6f0527f5" Jan 28 16:00:04 crc kubenswrapper[4903]: I0128 16:00:04.054708 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258" Jan 28 16:00:13 crc kubenswrapper[4903]: I0128 16:00:13.888078 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8576b765b5-pxrj2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.619327 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz"] Jan 28 16:00:14 crc kubenswrapper[4903]: E0128 16:00:14.619665 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5866143c-b9f1-4789-b270-00769269e4a1" containerName="collect-profiles" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.619691 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5866143c-b9f1-4789-b270-00769269e4a1" containerName="collect-profiles" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.619881 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="5866143c-b9f1-4789-b270-00769269e4a1" containerName="collect-profiles" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.620446 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.625076 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tr78t" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.637462 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.642586 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-769z8"] Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.645108 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.647909 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.649675 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.656589 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz"] Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.681032 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmkn4\" (UniqueName: \"kubernetes.io/projected/a59793c9-95fe-448d-999b-48f9e9f868c4-kube-api-access-kmkn4\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.681104 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xx4\" (UniqueName: \"kubernetes.io/projected/a5693cbe-40f0-4201-b041-de2c16d0d036-kube-api-access-75xx4\") pod \"frr-k8s-webhook-server-7df86c4f6c-ll2pz\" (UID: \"a5693cbe-40f0-4201-b041-de2c16d0d036\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.681151 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-startup\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.681182 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-sockets\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.681242 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-conf\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.681269 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-metrics\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.682084 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5693cbe-40f0-4201-b041-de2c16d0d036-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ll2pz\" (UID: \"a5693cbe-40f0-4201-b041-de2c16d0d036\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.682133 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-reloader\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.682184 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a59793c9-95fe-448d-999b-48f9e9f868c4-metrics-certs\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.733387 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2hv4p"] Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.734458 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.752889 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.753433 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.753755 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vtz47" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.754143 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.769438 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-z6rb2"] Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.770596 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.780748 4903 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.787976 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmkn4\" (UniqueName: \"kubernetes.io/projected/a59793c9-95fe-448d-999b-48f9e9f868c4-kube-api-access-kmkn4\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788018 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xx4\" (UniqueName: \"kubernetes.io/projected/a5693cbe-40f0-4201-b041-de2c16d0d036-kube-api-access-75xx4\") pod \"frr-k8s-webhook-server-7df86c4f6c-ll2pz\" (UID: \"a5693cbe-40f0-4201-b041-de2c16d0d036\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788042 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0d21f1ab-8818-44fd-b525-1b2319a775a1-metallb-excludel2\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788071 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-startup\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788086 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-metrics-certs\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788109 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-sockets\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788130 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-conf\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788147 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-metrics\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788163 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5693cbe-40f0-4201-b041-de2c16d0d036-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ll2pz\" (UID: 
\"a5693cbe-40f0-4201-b041-de2c16d0d036\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788179 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-reloader\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788193 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57dp\" (UniqueName: \"kubernetes.io/projected/0d21f1ab-8818-44fd-b525-1b2319a775a1-kube-api-access-j57dp\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a59793c9-95fe-448d-999b-48f9e9f868c4-metrics-certs\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.788231 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.789483 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-startup\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.789973 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-metrics\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: E0128 16:00:14.790034 4903 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 28 16:00:14 crc kubenswrapper[4903]: E0128 16:00:14.790118 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5693cbe-40f0-4201-b041-de2c16d0d036-cert podName:a5693cbe-40f0-4201-b041-de2c16d0d036 nodeName:}" failed. No retries permitted until 2026-01-28 16:00:15.290095205 +0000 UTC m=+887.566066716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a5693cbe-40f0-4201-b041-de2c16d0d036-cert") pod "frr-k8s-webhook-server-7df86c4f6c-ll2pz" (UID: "a5693cbe-40f0-4201-b041-de2c16d0d036") : secret "frr-k8s-webhook-server-cert" not found Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.790148 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-sockets\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.790308 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-frr-conf\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.790479 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a59793c9-95fe-448d-999b-48f9e9f868c4-reloader\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.798462 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-z6rb2"] Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.799610 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a59793c9-95fe-448d-999b-48f9e9f868c4-metrics-certs\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.819385 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmkn4\" (UniqueName: \"kubernetes.io/projected/a59793c9-95fe-448d-999b-48f9e9f868c4-kube-api-access-kmkn4\") pod \"frr-k8s-769z8\" (UID: \"a59793c9-95fe-448d-999b-48f9e9f868c4\") " pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.822154 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xx4\" (UniqueName: \"kubernetes.io/projected/a5693cbe-40f0-4201-b041-de2c16d0d036-kube-api-access-75xx4\") pod \"frr-k8s-webhook-server-7df86c4f6c-ll2pz\" (UID: \"a5693cbe-40f0-4201-b041-de2c16d0d036\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.890201 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.891447 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0d21f1ab-8818-44fd-b525-1b2319a775a1-metallb-excludel2\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.892365 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-metrics-certs\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.892906 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-metrics-certs\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.893013 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-cert\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.893128 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64w6\" (UniqueName: \"kubernetes.io/projected/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-kube-api-access-f64w6\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: E0128 16:00:14.891154 4903 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 16:00:14 crc kubenswrapper[4903]: E0128 16:00:14.893332 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist podName:0d21f1ab-8818-44fd-b525-1b2319a775a1 nodeName:}" failed. No retries permitted until 2026-01-28 16:00:15.393301574 +0000 UTC m=+887.669273165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist") pod "speaker-2hv4p" (UID: "0d21f1ab-8818-44fd-b525-1b2319a775a1") : secret "metallb-memberlist" not found Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.892297 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0d21f1ab-8818-44fd-b525-1b2319a775a1-metallb-excludel2\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.893242 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57dp\" (UniqueName: \"kubernetes.io/projected/0d21f1ab-8818-44fd-b525-1b2319a775a1-kube-api-access-j57dp\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.896294 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-metrics-certs\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.922046 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57dp\" (UniqueName: \"kubernetes.io/projected/0d21f1ab-8818-44fd-b525-1b2319a775a1-kube-api-access-j57dp\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.965619 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.994842 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64w6\" (UniqueName: \"kubernetes.io/projected/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-kube-api-access-f64w6\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.994985 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-metrics-certs\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.995015 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-cert\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.997939 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-cert\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:14 crc kubenswrapper[4903]: I0128 16:00:14.998702 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-metrics-certs\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.011605 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64w6\" (UniqueName: \"kubernetes.io/projected/ec9c33ed-bd09-4039-a2a4-79213dd84bc4-kube-api-access-f64w6\") pod \"controller-6968d8fdc4-z6rb2\" (UID: \"ec9c33ed-bd09-4039-a2a4-79213dd84bc4\") " pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.118373 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerStarted","Data":"5a58d5d371110046c456cf0591a1077e66fcfbc93187d3f9b8aaf49f1696ea7f"} Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.157824 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.298646 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5693cbe-40f0-4201-b041-de2c16d0d036-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ll2pz\" (UID: \"a5693cbe-40f0-4201-b041-de2c16d0d036\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.302113 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a5693cbe-40f0-4201-b041-de2c16d0d036-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ll2pz\" (UID: \"a5693cbe-40f0-4201-b041-de2c16d0d036\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.345878 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-z6rb2"] Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.400582 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:15 crc kubenswrapper[4903]: E0128 16:00:15.400795 4903 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 16:00:15 crc kubenswrapper[4903]: E0128 16:00:15.400878 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist podName:0d21f1ab-8818-44fd-b525-1b2319a775a1 nodeName:}" failed. No retries permitted until 2026-01-28 16:00:16.400858246 +0000 UTC m=+888.676829757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist") pod "speaker-2hv4p" (UID: "0d21f1ab-8818-44fd-b525-1b2319a775a1") : secret "metallb-memberlist" not found Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.543051 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:15 crc kubenswrapper[4903]: I0128 16:00:15.741707 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz"] Jan 28 16:00:15 crc kubenswrapper[4903]: W0128 16:00:15.748449 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5693cbe_40f0_4201_b041_de2c16d0d036.slice/crio-e83f094f7054fb90c8bdb59a987d53d1ae1219430cccf5cb99cdc0d8e8cf3413 WatchSource:0}: Error finding container e83f094f7054fb90c8bdb59a987d53d1ae1219430cccf5cb99cdc0d8e8cf3413: Status 404 returned error can't find the container with id e83f094f7054fb90c8bdb59a987d53d1ae1219430cccf5cb99cdc0d8e8cf3413 Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.124909 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z6rb2" event={"ID":"ec9c33ed-bd09-4039-a2a4-79213dd84bc4","Type":"ContainerStarted","Data":"30a975c5013d55d083628401c2212f453d0657a62b54e39a50022994acf4bdfd"} Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.124948 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z6rb2" event={"ID":"ec9c33ed-bd09-4039-a2a4-79213dd84bc4","Type":"ContainerStarted","Data":"2a3f3d1ddb878a5160c36ef698981da15c1877e2fe898a8ddb4afe43f4d2231b"} Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.124960 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-z6rb2" event={"ID":"ec9c33ed-bd09-4039-a2a4-79213dd84bc4","Type":"ContainerStarted","Data":"fe8455f9f762b52b29357eb028012797af75e82b522884a39418fb13c4293a97"} Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.125015 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.125718 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" event={"ID":"a5693cbe-40f0-4201-b041-de2c16d0d036","Type":"ContainerStarted","Data":"e83f094f7054fb90c8bdb59a987d53d1ae1219430cccf5cb99cdc0d8e8cf3413"} Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.138803 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-z6rb2" podStartSLOduration=2.13878426 podStartE2EDuration="2.13878426s" podCreationTimestamp="2026-01-28 16:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:00:16.138234315 +0000 UTC m=+888.414205826" watchObservedRunningTime="2026-01-28 16:00:16.13878426 +0000 UTC m=+888.414755771" Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.413815 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.427173 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0d21f1ab-8818-44fd-b525-1b2319a775a1-memberlist\") pod \"speaker-2hv4p\" (UID: \"0d21f1ab-8818-44fd-b525-1b2319a775a1\") " pod="metallb-system/speaker-2hv4p" Jan 28 
16:00:16 crc kubenswrapper[4903]: I0128 16:00:16.578102 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2hv4p" Jan 28 16:00:17 crc kubenswrapper[4903]: I0128 16:00:17.145208 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hv4p" event={"ID":"0d21f1ab-8818-44fd-b525-1b2319a775a1","Type":"ContainerStarted","Data":"4b5bf488d77c6ca45bd89d07bd3644b04a2e5909a391930b2804d4bbddc05890"} Jan 28 16:00:17 crc kubenswrapper[4903]: I0128 16:00:17.145602 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hv4p" event={"ID":"0d21f1ab-8818-44fd-b525-1b2319a775a1","Type":"ContainerStarted","Data":"c813275edb24f05fe93929630ac2087b688a725017656d591a26cd0070035431"} Jan 28 16:00:17 crc kubenswrapper[4903]: I0128 16:00:17.145623 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hv4p" event={"ID":"0d21f1ab-8818-44fd-b525-1b2319a775a1","Type":"ContainerStarted","Data":"55ed974a67788a314382f2ed4de680f64378daa32b9b053cab995a6305c51ea4"} Jan 28 16:00:17 crc kubenswrapper[4903]: I0128 16:00:17.145798 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2hv4p" Jan 28 16:00:17 crc kubenswrapper[4903]: I0128 16:00:17.164505 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2hv4p" podStartSLOduration=3.164486733 podStartE2EDuration="3.164486733s" podCreationTimestamp="2026-01-28 16:00:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:00:17.16179686 +0000 UTC m=+889.437768371" watchObservedRunningTime="2026-01-28 16:00:17.164486733 +0000 UTC m=+889.440458244" Jan 28 16:00:23 crc kubenswrapper[4903]: I0128 16:00:23.194057 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" event={"ID":"a5693cbe-40f0-4201-b041-de2c16d0d036","Type":"ContainerStarted","Data":"2580eece869d38d575fca11813f3ca0dc55dcad9fb862e5b2aa7aceb896cbd35"} Jan 28 16:00:23 crc kubenswrapper[4903]: I0128 16:00:23.194658 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:23 crc kubenswrapper[4903]: I0128 16:00:23.196161 4903 generic.go:334] "Generic (PLEG): container finished" podID="a59793c9-95fe-448d-999b-48f9e9f868c4" containerID="da3776affb2b0e3e42342706a97fd4d6fca289e89f86719c3c3466964518ca46" exitCode=0 Jan 28 16:00:23 crc kubenswrapper[4903]: I0128 16:00:23.196205 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerDied","Data":"da3776affb2b0e3e42342706a97fd4d6fca289e89f86719c3c3466964518ca46"} Jan 28 16:00:23 crc kubenswrapper[4903]: I0128 16:00:23.221498 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" podStartSLOduration=2.386188347 podStartE2EDuration="9.221484339s" podCreationTimestamp="2026-01-28 16:00:14 +0000 UTC" firstStartedPulling="2026-01-28 16:00:15.754542846 +0000 UTC m=+888.030514357" lastFinishedPulling="2026-01-28 16:00:22.589838838 +0000 UTC m=+894.865810349" observedRunningTime="2026-01-28 16:00:23.218853937 +0000 UTC m=+895.494825448" watchObservedRunningTime="2026-01-28 16:00:23.221484339 +0000 UTC m=+895.497455850" Jan 28 16:00:24 crc 
kubenswrapper[4903]: I0128 16:00:24.206294 4903 generic.go:334] "Generic (PLEG): container finished" podID="a59793c9-95fe-448d-999b-48f9e9f868c4" containerID="ab3df8269d853a26aae421ec975a51fc42876243db98bd24e67a21c19b48b63f" exitCode=0 Jan 28 16:00:24 crc kubenswrapper[4903]: I0128 16:00:24.206404 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerDied","Data":"ab3df8269d853a26aae421ec975a51fc42876243db98bd24e67a21c19b48b63f"} Jan 28 16:00:25 crc kubenswrapper[4903]: I0128 16:00:25.165590 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-z6rb2" Jan 28 16:00:25 crc kubenswrapper[4903]: I0128 16:00:25.214261 4903 generic.go:334] "Generic (PLEG): container finished" podID="a59793c9-95fe-448d-999b-48f9e9f868c4" containerID="52c41f5a84494ecc1486a79f7feac71bc45eab02ec7d1ec9b77cdc622f66dc91" exitCode=0 Jan 28 16:00:25 crc kubenswrapper[4903]: I0128 16:00:25.214313 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerDied","Data":"52c41f5a84494ecc1486a79f7feac71bc45eab02ec7d1ec9b77cdc622f66dc91"} Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.223890 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerStarted","Data":"ebc2377fc89718b11929e994072022ee20aa769832770fe7e744249fd495ae2a"} Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.224250 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.224266 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerStarted","Data":"06ddfe96d2f8ef9bae885902fec4c560f43c09b601e70f4177eb94108b483fbf"} Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.224281 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerStarted","Data":"65e7be0ea6fd4e36609a71eef781c2b27bec53f183b99d99bde8b2fd8fb583de"} Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.224293 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerStarted","Data":"732282b3d780b6c0cf556a468ce2aefdc96e2118868d80810ab9eecda9cf118f"} Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.224316 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerStarted","Data":"5ff6c0f600dfdd470d6f7d8b1dc4bf2aa7d9802c9b686d098c78c978ffb9a456"} Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.224329 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-769z8" event={"ID":"a59793c9-95fe-448d-999b-48f9e9f868c4","Type":"ContainerStarted","Data":"a0e0b92dd622a4268744d05f999fd00b038048a7f48915343506677ab2b9340d"} Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.252509 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-769z8" podStartSLOduration=4.746496882 podStartE2EDuration="12.252492431s" podCreationTimestamp="2026-01-28 
16:00:14 +0000 UTC" firstStartedPulling="2026-01-28 16:00:15.06780217 +0000 UTC m=+887.343773681" lastFinishedPulling="2026-01-28 16:00:22.573797719 +0000 UTC m=+894.849769230" observedRunningTime="2026-01-28 16:00:26.250655311 +0000 UTC m=+898.526626832" watchObservedRunningTime="2026-01-28 16:00:26.252492431 +0000 UTC m=+898.528463942" Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.583213 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2hv4p" Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.613938 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:00:26 crc kubenswrapper[4903]: I0128 16:00:26.614002 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.005748 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr"] Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.006889 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.010207 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.015410 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr"] Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.094800 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.094925 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.094959 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkr2\" (UniqueName: \"kubernetes.io/projected/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-kube-api-access-4wkr2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 
crc kubenswrapper[4903]: I0128 16:00:28.196205 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.196293 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.196331 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkr2\" (UniqueName: \"kubernetes.io/projected/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-kube-api-access-4wkr2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.196799 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.196827 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.223455 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkr2\" (UniqueName: \"kubernetes.io/projected/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-kube-api-access-4wkr2\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.322170 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:28 crc kubenswrapper[4903]: I0128 16:00:28.732033 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr"] Jan 28 16:00:29 crc kubenswrapper[4903]: E0128 16:00:29.087285 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1841a983_e250_4f9e_8e7f_8a42a2b2bee0.slice/crio-d61b8ee6128af95d7da447952d17a993c310c16bc58d9ca5d4a37f6165294984.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1841a983_e250_4f9e_8e7f_8a42a2b2bee0.slice/crio-conmon-d61b8ee6128af95d7da447952d17a993c310c16bc58d9ca5d4a37f6165294984.scope\": RecentStats: unable to find data in memory cache]" Jan 28 16:00:29 crc kubenswrapper[4903]: I0128 16:00:29.240207 4903 generic.go:334] "Generic (PLEG): container finished" podID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerID="d61b8ee6128af95d7da447952d17a993c310c16bc58d9ca5d4a37f6165294984" exitCode=0 Jan 28 16:00:29 crc kubenswrapper[4903]: I0128 16:00:29.240268 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" event={"ID":"1841a983-e250-4f9e-8e7f-8a42a2b2bee0","Type":"ContainerDied","Data":"d61b8ee6128af95d7da447952d17a993c310c16bc58d9ca5d4a37f6165294984"} Jan 28 16:00:29 crc kubenswrapper[4903]: I0128 16:00:29.240552 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" event={"ID":"1841a983-e250-4f9e-8e7f-8a42a2b2bee0","Type":"ContainerStarted","Data":"ae8336dfce7458d96043bfa1c71d47bb2423cd6d0304e303787873dbfcc11221"} Jan 28 16:00:29 crc kubenswrapper[4903]: I0128 16:00:29.966011 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:30 crc kubenswrapper[4903]: I0128 16:00:30.006151 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.562474 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zrvqz"] Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.565303 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.570189 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrvqz"] Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.660285 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-utilities\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.660344 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-catalog-content\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.660377 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qwb7\" (UniqueName: \"kubernetes.io/projected/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-kube-api-access-9qwb7\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.762750 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-utilities\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.763306 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-catalog-content\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.763360 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-utilities\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.763434 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qwb7\" (UniqueName: \"kubernetes.io/projected/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-kube-api-access-9qwb7\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.763922 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-catalog-content\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.791547 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9qwb7\" (UniqueName: \"kubernetes.io/projected/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-kube-api-access-9qwb7\") pod \"redhat-marketplace-zrvqz\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:32 crc kubenswrapper[4903]: I0128 16:00:32.892408 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:33 crc kubenswrapper[4903]: I0128 16:00:33.263893 4903 generic.go:334] "Generic (PLEG): container finished" podID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerID="fad3e03621204f557bd756c0deb121cf3fbc693958943668a001c905198fa51c" exitCode=0 Jan 28 16:00:33 crc kubenswrapper[4903]: I0128 16:00:33.263995 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" event={"ID":"1841a983-e250-4f9e-8e7f-8a42a2b2bee0","Type":"ContainerDied","Data":"fad3e03621204f557bd756c0deb121cf3fbc693958943668a001c905198fa51c"} Jan 28 16:00:33 crc kubenswrapper[4903]: I0128 16:00:33.307439 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrvqz"] Jan 28 16:00:33 crc kubenswrapper[4903]: W0128 16:00:33.336654 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod310b9e14_79e6_4fbf_904c_bf2752bcfb8a.slice/crio-a10cd78e148ff64a209fcff2d1c79481e7e8b2aef3ec7fca665012308e19ae2c WatchSource:0}: Error finding container a10cd78e148ff64a209fcff2d1c79481e7e8b2aef3ec7fca665012308e19ae2c: Status 404 returned error can't find the container with id a10cd78e148ff64a209fcff2d1c79481e7e8b2aef3ec7fca665012308e19ae2c Jan 28 16:00:34 crc kubenswrapper[4903]: I0128 16:00:34.273878 4903 generic.go:334] "Generic (PLEG): container finished" podID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerID="daa593b2f42a33362a0cd2e51aeb8e1b54f2b40e5c8e64d2d987a1700678f3af" exitCode=0 Jan 28 16:00:34 crc kubenswrapper[4903]: I0128 16:00:34.274072 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrvqz" event={"ID":"310b9e14-79e6-4fbf-904c-bf2752bcfb8a","Type":"ContainerDied","Data":"daa593b2f42a33362a0cd2e51aeb8e1b54f2b40e5c8e64d2d987a1700678f3af"} Jan 28 16:00:34 crc kubenswrapper[4903]: I0128 16:00:34.274444 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrvqz" event={"ID":"310b9e14-79e6-4fbf-904c-bf2752bcfb8a","Type":"ContainerStarted","Data":"a10cd78e148ff64a209fcff2d1c79481e7e8b2aef3ec7fca665012308e19ae2c"} Jan 28 16:00:34 crc kubenswrapper[4903]: I0128 16:00:34.282623 4903 generic.go:334] "Generic (PLEG): container finished" podID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerID="81fea2cdac0d76769aa814ee5c4d23d99d0e99d9a32dca38a4318f67434b5504" exitCode=0 Jan 28 16:00:34 crc kubenswrapper[4903]: I0128 16:00:34.282758 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" event={"ID":"1841a983-e250-4f9e-8e7f-8a42a2b2bee0","Type":"ContainerDied","Data":"81fea2cdac0d76769aa814ee5c4d23d99d0e99d9a32dca38a4318f67434b5504"} Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.291716 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrvqz" 
event={"ID":"310b9e14-79e6-4fbf-904c-bf2752bcfb8a","Type":"ContainerStarted","Data":"0705e59ffab37ae83bd3202f7493e2abb9703e9757dfb7cf942cb89f7393d62a"} Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.555312 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ll2pz" Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.596977 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.712941 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-util\") pod \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.713008 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-bundle\") pod \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.713074 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkr2\" (UniqueName: \"kubernetes.io/projected/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-kube-api-access-4wkr2\") pod \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\" (UID: \"1841a983-e250-4f9e-8e7f-8a42a2b2bee0\") " Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.714720 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-bundle" (OuterVolumeSpecName: "bundle") pod "1841a983-e250-4f9e-8e7f-8a42a2b2bee0" (UID: "1841a983-e250-4f9e-8e7f-8a42a2b2bee0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.722637 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-kube-api-access-4wkr2" (OuterVolumeSpecName: "kube-api-access-4wkr2") pod "1841a983-e250-4f9e-8e7f-8a42a2b2bee0" (UID: "1841a983-e250-4f9e-8e7f-8a42a2b2bee0"). InnerVolumeSpecName "kube-api-access-4wkr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.728742 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-util" (OuterVolumeSpecName: "util") pod "1841a983-e250-4f9e-8e7f-8a42a2b2bee0" (UID: "1841a983-e250-4f9e-8e7f-8a42a2b2bee0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.815148 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkr2\" (UniqueName: \"kubernetes.io/projected/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-kube-api-access-4wkr2\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.815201 4903 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-util\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:35 crc kubenswrapper[4903]: I0128 16:00:35.815213 4903 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1841a983-e250-4f9e-8e7f-8a42a2b2bee0-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:36 crc kubenswrapper[4903]: I0128 16:00:36.301046 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" Jan 28 16:00:36 crc kubenswrapper[4903]: I0128 16:00:36.301031 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aklfnr" event={"ID":"1841a983-e250-4f9e-8e7f-8a42a2b2bee0","Type":"ContainerDied","Data":"ae8336dfce7458d96043bfa1c71d47bb2423cd6d0304e303787873dbfcc11221"} Jan 28 16:00:36 crc kubenswrapper[4903]: I0128 16:00:36.301504 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8336dfce7458d96043bfa1c71d47bb2423cd6d0304e303787873dbfcc11221" Jan 28 16:00:36 crc kubenswrapper[4903]: I0128 16:00:36.303167 4903 generic.go:334] "Generic (PLEG): container finished" podID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerID="0705e59ffab37ae83bd3202f7493e2abb9703e9757dfb7cf942cb89f7393d62a" exitCode=0 Jan 28 16:00:36 crc kubenswrapper[4903]: I0128 16:00:36.303206 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrvqz" event={"ID":"310b9e14-79e6-4fbf-904c-bf2752bcfb8a","Type":"ContainerDied","Data":"0705e59ffab37ae83bd3202f7493e2abb9703e9757dfb7cf942cb89f7393d62a"} Jan 28 16:00:37 crc kubenswrapper[4903]: I0128 16:00:37.311005 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrvqz" event={"ID":"310b9e14-79e6-4fbf-904c-bf2752bcfb8a","Type":"ContainerStarted","Data":"9b10a52a4283eb5ebb9384c62a0a279c2b132abd856e046fc6d2130db76cb3a9"} Jan 28 16:00:37 crc kubenswrapper[4903]: I0128 16:00:37.329937 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zrvqz" podStartSLOduration=2.881331237 podStartE2EDuration="5.329921652s" podCreationTimestamp="2026-01-28 16:00:32 +0000 UTC" firstStartedPulling="2026-01-28 16:00:34.276830078 +0000 UTC m=+906.552801589" lastFinishedPulling="2026-01-28 16:00:36.725420493 +0000 UTC m=+909.001392004" observedRunningTime="2026-01-28 16:00:37.329075339 +0000 UTC m=+909.605046850" watchObservedRunningTime="2026-01-28 16:00:37.329921652 +0000 UTC m=+909.605893153" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.459144 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv"] Jan 28 16:00:39 crc kubenswrapper[4903]: E0128 16:00:39.460700 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerName="util" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.460890 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerName="util" Jan 28 16:00:39 crc kubenswrapper[4903]: E0128 16:00:39.460959 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerName="pull" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.461008 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerName="pull" Jan 28 16:00:39 crc kubenswrapper[4903]: E0128 16:00:39.461063 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerName="extract" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.461109 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerName="extract" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.461284 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1841a983-e250-4f9e-8e7f-8a42a2b2bee0" containerName="extract" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.461726 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.463934 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.464283 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.464580 4903 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-rgg9l" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.515466 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv"] Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.559470 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc10878-16a6-4e38-87bc-7c9548e3fa88-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5hmsv\" (UID: \"9cc10878-16a6-4e38-87bc-7c9548e3fa88\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.559599 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dq5\" (UniqueName: \"kubernetes.io/projected/9cc10878-16a6-4e38-87bc-7c9548e3fa88-kube-api-access-r2dq5\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5hmsv\" (UID: \"9cc10878-16a6-4e38-87bc-7c9548e3fa88\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.660889 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2dq5\" (UniqueName: \"kubernetes.io/projected/9cc10878-16a6-4e38-87bc-7c9548e3fa88-kube-api-access-r2dq5\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5hmsv\" (UID: \"9cc10878-16a6-4e38-87bc-7c9548e3fa88\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.661025 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc10878-16a6-4e38-87bc-7c9548e3fa88-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5hmsv\" (UID: \"9cc10878-16a6-4e38-87bc-7c9548e3fa88\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.661669 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9cc10878-16a6-4e38-87bc-7c9548e3fa88-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5hmsv\" (UID: \"9cc10878-16a6-4e38-87bc-7c9548e3fa88\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.696468 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2dq5\" (UniqueName: \"kubernetes.io/projected/9cc10878-16a6-4e38-87bc-7c9548e3fa88-kube-api-access-r2dq5\") pod \"cert-manager-operator-controller-manager-64cf6dff88-5hmsv\" (UID: \"9cc10878-16a6-4e38-87bc-7c9548e3fa88\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:39 crc kubenswrapper[4903]: I0128 16:00:39.776866 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" Jan 28 16:00:40 crc kubenswrapper[4903]: I0128 16:00:40.217370 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv"] Jan 28 16:00:40 crc kubenswrapper[4903]: W0128 16:00:40.222216 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cc10878_16a6_4e38_87bc_7c9548e3fa88.slice/crio-f9a13c63ce75b6f8031f5293246e789bd98060b26cc8bb7391d68ff1935038ca WatchSource:0}: Error finding container f9a13c63ce75b6f8031f5293246e789bd98060b26cc8bb7391d68ff1935038ca: Status 404 returned error can't find the container with id f9a13c63ce75b6f8031f5293246e789bd98060b26cc8bb7391d68ff1935038ca Jan 28 16:00:40 crc kubenswrapper[4903]: I0128 16:00:40.328263 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" event={"ID":"9cc10878-16a6-4e38-87bc-7c9548e3fa88","Type":"ContainerStarted","Data":"f9a13c63ce75b6f8031f5293246e789bd98060b26cc8bb7391d68ff1935038ca"} Jan 28 16:00:42 crc kubenswrapper[4903]: I0128 16:00:42.892561 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:42 crc kubenswrapper[4903]: I0128 16:00:42.892918 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:42 crc kubenswrapper[4903]: I0128 16:00:42.933747 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:43 crc kubenswrapper[4903]: I0128 16:00:43.382862 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:44 crc kubenswrapper[4903]: I0128 
16:00:44.969210 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-769z8" Jan 28 16:00:45 crc kubenswrapper[4903]: I0128 16:00:45.359167 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrvqz"] Jan 28 16:00:45 crc kubenswrapper[4903]: I0128 16:00:45.359395 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zrvqz" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerName="registry-server" containerID="cri-o://9b10a52a4283eb5ebb9384c62a0a279c2b132abd856e046fc6d2130db76cb3a9" gracePeriod=2 Jan 28 16:00:46 crc kubenswrapper[4903]: I0128 16:00:46.373141 4903 generic.go:334] "Generic (PLEG): container finished" podID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerID="9b10a52a4283eb5ebb9384c62a0a279c2b132abd856e046fc6d2130db76cb3a9" exitCode=0 Jan 28 16:00:46 crc kubenswrapper[4903]: I0128 16:00:46.373239 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrvqz" event={"ID":"310b9e14-79e6-4fbf-904c-bf2752bcfb8a","Type":"ContainerDied","Data":"9b10a52a4283eb5ebb9384c62a0a279c2b132abd856e046fc6d2130db76cb3a9"} Jan 28 16:00:48 crc kubenswrapper[4903]: I0128 16:00:48.727547 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:48 crc kubenswrapper[4903]: I0128 16:00:48.911819 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-catalog-content\") pod \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " Jan 28 16:00:48 crc kubenswrapper[4903]: I0128 16:00:48.912126 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-utilities\") pod \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " Jan 28 16:00:48 crc kubenswrapper[4903]: I0128 16:00:48.912272 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qwb7\" (UniqueName: \"kubernetes.io/projected/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-kube-api-access-9qwb7\") pod \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\" (UID: \"310b9e14-79e6-4fbf-904c-bf2752bcfb8a\") " Jan 28 16:00:48 crc kubenswrapper[4903]: I0128 16:00:48.913034 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-utilities" (OuterVolumeSpecName: "utilities") pod "310b9e14-79e6-4fbf-904c-bf2752bcfb8a" (UID: "310b9e14-79e6-4fbf-904c-bf2752bcfb8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:00:48 crc kubenswrapper[4903]: I0128 16:00:48.918670 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-kube-api-access-9qwb7" (OuterVolumeSpecName: "kube-api-access-9qwb7") pod "310b9e14-79e6-4fbf-904c-bf2752bcfb8a" (UID: "310b9e14-79e6-4fbf-904c-bf2752bcfb8a"). InnerVolumeSpecName "kube-api-access-9qwb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:00:48 crc kubenswrapper[4903]: I0128 16:00:48.946452 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "310b9e14-79e6-4fbf-904c-bf2752bcfb8a" (UID: "310b9e14-79e6-4fbf-904c-bf2752bcfb8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.014234 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.014268 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.014278 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qwb7\" (UniqueName: \"kubernetes.io/projected/310b9e14-79e6-4fbf-904c-bf2752bcfb8a-kube-api-access-9qwb7\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.399929 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zrvqz" event={"ID":"310b9e14-79e6-4fbf-904c-bf2752bcfb8a","Type":"ContainerDied","Data":"a10cd78e148ff64a209fcff2d1c79481e7e8b2aef3ec7fca665012308e19ae2c"} Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.400299 4903 scope.go:117] "RemoveContainer" containerID="9b10a52a4283eb5ebb9384c62a0a279c2b132abd856e046fc6d2130db76cb3a9" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.400000 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zrvqz" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.401556 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" event={"ID":"9cc10878-16a6-4e38-87bc-7c9548e3fa88","Type":"ContainerStarted","Data":"f28a57033cc926b2e29c426711afe03ecbc7162f48fde3ea181b9e99df82b5aa"} Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.419107 4903 scope.go:117] "RemoveContainer" containerID="0705e59ffab37ae83bd3202f7493e2abb9703e9757dfb7cf942cb89f7393d62a" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.429906 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-5hmsv" podStartSLOduration=2.110442874 podStartE2EDuration="10.42988352s" podCreationTimestamp="2026-01-28 16:00:39 +0000 UTC" firstStartedPulling="2026-01-28 16:00:40.224851877 +0000 UTC m=+912.500823388" lastFinishedPulling="2026-01-28 16:00:48.544292523 +0000 UTC m=+920.820264034" observedRunningTime="2026-01-28 16:00:49.427372062 +0000 UTC m=+921.703343583" watchObservedRunningTime="2026-01-28 16:00:49.42988352 +0000 UTC m=+921.705855031" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.442935 4903 scope.go:117] "RemoveContainer" containerID="daa593b2f42a33362a0cd2e51aeb8e1b54f2b40e5c8e64d2d987a1700678f3af" Jan 28 16:00:49 crc kubenswrapper[4903]: E0128 16:00:49.446493 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod310b9e14_79e6_4fbf_904c_bf2752bcfb8a.slice\": RecentStats: unable to find data in memory cache]" Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.448580 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrvqz"] Jan 28 16:00:49 crc kubenswrapper[4903]: I0128 16:00:49.453824 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zrvqz"] Jan 28 16:00:50 crc kubenswrapper[4903]: I0128 16:00:50.421454 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" path="/var/lib/kubelet/pods/310b9e14-79e6-4fbf-904c-bf2752bcfb8a/volumes" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.791535 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd"] Jan 28 16:00:52 crc kubenswrapper[4903]: E0128 16:00:52.791802 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerName="extract-utilities" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.791813 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerName="extract-utilities" Jan 28 16:00:52 crc kubenswrapper[4903]: E0128 16:00:52.791825 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerName="extract-content" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.791831 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerName="extract-content" Jan 28 16:00:52 crc kubenswrapper[4903]: E0128 16:00:52.791850 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" 
containerName="registry-server" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.791856 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerName="registry-server" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.791962 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="310b9e14-79e6-4fbf-904c-bf2752bcfb8a" containerName="registry-server" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.792434 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.794219 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.794363 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.796865 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zt666"] Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.798498 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.803390 4903 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7zjlb" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.803641 4903 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6fnwh" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.813743 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd"] Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.820019 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zt666"] Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.967200 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f9cc593-b7ca-4e05-9bc0-38fe9df43c52-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bc2bd\" (UID: \"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.967264 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a9a7d66-649c-4d15-a681-ca87fd3dbb5a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zt666\" (UID: \"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.967304 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffpq\" (UniqueName: \"kubernetes.io/projected/5f9cc593-b7ca-4e05-9bc0-38fe9df43c52-kube-api-access-6ffpq\") pod \"cert-manager-cainjector-855d9ccff4-bc2bd\" (UID: \"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:52 crc kubenswrapper[4903]: I0128 16:00:52.967383 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wrffs\" (UniqueName: \"kubernetes.io/projected/1a9a7d66-649c-4d15-a681-ca87fd3dbb5a-kube-api-access-wrffs\") pod \"cert-manager-webhook-f4fb5df64-zt666\" (UID: \"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.071896 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrffs\" (UniqueName: \"kubernetes.io/projected/1a9a7d66-649c-4d15-a681-ca87fd3dbb5a-kube-api-access-wrffs\") pod \"cert-manager-webhook-f4fb5df64-zt666\" (UID: \"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.072052 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f9cc593-b7ca-4e05-9bc0-38fe9df43c52-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bc2bd\" (UID: \"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.072110 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a9a7d66-649c-4d15-a681-ca87fd3dbb5a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zt666\" (UID: \"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.072136 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ffpq\" (UniqueName: \"kubernetes.io/projected/5f9cc593-b7ca-4e05-9bc0-38fe9df43c52-kube-api-access-6ffpq\") pod \"cert-manager-cainjector-855d9ccff4-bc2bd\" (UID: \"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.092920 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrffs\" (UniqueName: \"kubernetes.io/projected/1a9a7d66-649c-4d15-a681-ca87fd3dbb5a-kube-api-access-wrffs\") pod \"cert-manager-webhook-f4fb5df64-zt666\" (UID: \"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.094703 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f9cc593-b7ca-4e05-9bc0-38fe9df43c52-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-bc2bd\" (UID: \"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.097062 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a9a7d66-649c-4d15-a681-ca87fd3dbb5a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-zt666\" (UID: \"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.098657 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffpq\" (UniqueName: \"kubernetes.io/projected/5f9cc593-b7ca-4e05-9bc0-38fe9df43c52-kube-api-access-6ffpq\") pod \"cert-manager-cainjector-855d9ccff4-bc2bd\" (UID: \"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52\") " 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.108634 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.115046 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.562165 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-zt666"] Jan 28 16:00:53 crc kubenswrapper[4903]: I0128 16:00:53.581587 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd"] Jan 28 16:00:53 crc kubenswrapper[4903]: W0128 16:00:53.590074 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f9cc593_b7ca_4e05_9bc0_38fe9df43c52.slice/crio-2e756f50d46d407cf2053950e9a011bd395d8de7914f7e5295e0d8bbeb1e059e WatchSource:0}: Error finding container 2e756f50d46d407cf2053950e9a011bd395d8de7914f7e5295e0d8bbeb1e059e: Status 404 returned error can't find the container with id 2e756f50d46d407cf2053950e9a011bd395d8de7914f7e5295e0d8bbeb1e059e Jan 28 16:00:54 crc kubenswrapper[4903]: I0128 16:00:54.431793 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" event={"ID":"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52","Type":"ContainerStarted","Data":"2e756f50d46d407cf2053950e9a011bd395d8de7914f7e5295e0d8bbeb1e059e"} Jan 28 16:00:54 crc kubenswrapper[4903]: I0128 16:00:54.432764 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" event={"ID":"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a","Type":"ContainerStarted","Data":"00dfa81ea16be97134bffb95df7f6d60d769a413f637a3c268aefc94c4656f62"} Jan 28 16:00:56 crc kubenswrapper[4903]: I0128 16:00:56.613264 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:00:56 crc kubenswrapper[4903]: I0128 16:00:56.613626 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:01:01 crc kubenswrapper[4903]: I0128 16:01:01.493036 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" event={"ID":"5f9cc593-b7ca-4e05-9bc0-38fe9df43c52","Type":"ContainerStarted","Data":"bd8c3cced68ac1e687e484cb84c3be1cf27854bed617ca9bd7fd6582c151dcc7"} Jan 28 16:01:01 crc kubenswrapper[4903]: I0128 16:01:01.494659 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" event={"ID":"1a9a7d66-649c-4d15-a681-ca87fd3dbb5a","Type":"ContainerStarted","Data":"382e7e2c058ce93b7ebdbfe288d3444da892755efa6c21c10d7ddfe56a5750a7"} Jan 28 16:01:01 crc kubenswrapper[4903]: I0128 16:01:01.494835 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:01:01 crc kubenswrapper[4903]: I0128 16:01:01.520105 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-bc2bd" podStartSLOduration=2.217638251 podStartE2EDuration="9.520085151s" podCreationTimestamp="2026-01-28 16:00:52 +0000 UTC" firstStartedPulling="2026-01-28 16:00:53.591445349 +0000 UTC m=+925.867416860" lastFinishedPulling="2026-01-28 16:01:00.893892249 +0000 UTC m=+933.169863760" observedRunningTime="2026-01-28 16:01:01.518938861 +0000 UTC m=+933.794910382" watchObservedRunningTime="2026-01-28 16:01:01.520085151 +0000 UTC m=+933.796056662" Jan 28 16:01:01 crc kubenswrapper[4903]: I0128 16:01:01.539965 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" podStartSLOduration=2.224744435 podStartE2EDuration="9.539943754s" podCreationTimestamp="2026-01-28 16:00:52 +0000 UTC" firstStartedPulling="2026-01-28 16:00:53.572719417 +0000 UTC m=+925.848690928" lastFinishedPulling="2026-01-28 16:01:00.887918726 +0000 UTC m=+933.163890247" observedRunningTime="2026-01-28 16:01:01.538930146 +0000 UTC m=+933.814901667" watchObservedRunningTime="2026-01-28 16:01:01.539943754 +0000 UTC m=+933.815915275" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.118583 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-zt666" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.635223 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k6sz7"] Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.636985 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.654247 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6sz7"] Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.699460 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq59d\" (UniqueName: \"kubernetes.io/projected/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-kube-api-access-xq59d\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.699800 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-utilities\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.700150 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-catalog-content\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.801445 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-utilities\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.801824 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-catalog-content\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.802023 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-utilities\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.802264 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-catalog-content\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.802646 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq59d\" (UniqueName: \"kubernetes.io/projected/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-kube-api-access-xq59d\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.839482 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xq59d\" (UniqueName: \"kubernetes.io/projected/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-kube-api-access-xq59d\") pod \"certified-operators-k6sz7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:08 crc kubenswrapper[4903]: I0128 16:01:08.960489 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:09 crc kubenswrapper[4903]: I0128 16:01:09.424145 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6sz7"] Jan 28 16:01:09 crc kubenswrapper[4903]: I0128 16:01:09.545259 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6sz7" event={"ID":"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7","Type":"ContainerStarted","Data":"957ce256bc2648aac4de707999403cef64007da25a302eee3e4a95b992c246a7"} Jan 28 16:01:10 crc kubenswrapper[4903]: I0128 16:01:10.553009 4903 generic.go:334] "Generic (PLEG): container finished" podID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerID="8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c" exitCode=0 Jan 28 16:01:10 crc kubenswrapper[4903]: I0128 16:01:10.553080 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6sz7" event={"ID":"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7","Type":"ContainerDied","Data":"8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c"} Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.144499 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nrvgq"] Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.145560 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.147955 4903 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wjq99" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.156228 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nrvgq"] Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.235066 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8713d75-2d9e-424d-ae5d-6134f032503c-bound-sa-token\") pod \"cert-manager-86cb77c54b-nrvgq\" (UID: \"c8713d75-2d9e-424d-ae5d-6134f032503c\") " pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.235125 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmdhm\" (UniqueName: \"kubernetes.io/projected/c8713d75-2d9e-424d-ae5d-6134f032503c-kube-api-access-zmdhm\") pod \"cert-manager-86cb77c54b-nrvgq\" (UID: \"c8713d75-2d9e-424d-ae5d-6134f032503c\") " pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.335437 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmdhm\" (UniqueName: \"kubernetes.io/projected/c8713d75-2d9e-424d-ae5d-6134f032503c-kube-api-access-zmdhm\") pod \"cert-manager-86cb77c54b-nrvgq\" (UID: \"c8713d75-2d9e-424d-ae5d-6134f032503c\") " pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.335818 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8713d75-2d9e-424d-ae5d-6134f032503c-bound-sa-token\") pod \"cert-manager-86cb77c54b-nrvgq\" (UID: \"c8713d75-2d9e-424d-ae5d-6134f032503c\") " pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.357412 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8713d75-2d9e-424d-ae5d-6134f032503c-bound-sa-token\") pod \"cert-manager-86cb77c54b-nrvgq\" (UID: \"c8713d75-2d9e-424d-ae5d-6134f032503c\") " pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.364862 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmdhm\" (UniqueName: \"kubernetes.io/projected/c8713d75-2d9e-424d-ae5d-6134f032503c-kube-api-access-zmdhm\") pod \"cert-manager-86cb77c54b-nrvgq\" (UID: \"c8713d75-2d9e-424d-ae5d-6134f032503c\") " pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.467393 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-nrvgq" Jan 28 16:01:11 crc kubenswrapper[4903]: I0128 16:01:11.684524 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-nrvgq"] Jan 28 16:01:12 crc kubenswrapper[4903]: I0128 16:01:12.581569 4903 generic.go:334] "Generic (PLEG): container finished" podID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerID="10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa" exitCode=0 Jan 28 16:01:12 crc kubenswrapper[4903]: I0128 16:01:12.581708 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6sz7" event={"ID":"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7","Type":"ContainerDied","Data":"10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa"} Jan 28 16:01:12 crc kubenswrapper[4903]: I0128 16:01:12.584895 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nrvgq" event={"ID":"c8713d75-2d9e-424d-ae5d-6134f032503c","Type":"ContainerStarted","Data":"83c34acacfd6329943e5546427f22b78a5247517a47b78f91fbb19316779d8f1"} Jan 28 16:01:12 crc kubenswrapper[4903]: I0128 16:01:12.584982 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-nrvgq" event={"ID":"c8713d75-2d9e-424d-ae5d-6134f032503c","Type":"ContainerStarted","Data":"de66db30d4ab2c60eae1c725a64c4ebfcdd74cc81d3bceaeb3581e9dbbaa428f"} Jan 28 16:01:12 crc kubenswrapper[4903]: I0128 16:01:12.621826 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-nrvgq" podStartSLOduration=1.6218066169999998 podStartE2EDuration="1.621806617s" podCreationTimestamp="2026-01-28 16:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:01:12.618811995 +0000 UTC m=+944.894783516" watchObservedRunningTime="2026-01-28 16:01:12.621806617 +0000 UTC m=+944.897778128" Jan 28 16:01:13 crc kubenswrapper[4903]: I0128 16:01:13.592653 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6sz7" event={"ID":"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7","Type":"ContainerStarted","Data":"8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7"} Jan 28 16:01:18 crc kubenswrapper[4903]: I0128 16:01:18.961587 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:18 crc kubenswrapper[4903]: I0128 16:01:18.962205 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:19 crc kubenswrapper[4903]: I0128 16:01:19.027779 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:19 crc kubenswrapper[4903]: I0128 16:01:19.047178 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k6sz7" podStartSLOduration=8.478263932 podStartE2EDuration="11.047161823s" podCreationTimestamp="2026-01-28 16:01:08 +0000 UTC" firstStartedPulling="2026-01-28 16:01:10.554692261 +0000 UTC m=+942.830663812" lastFinishedPulling="2026-01-28 16:01:13.123590192 +0000 UTC m=+945.399561703" observedRunningTime="2026-01-28 16:01:13.608842744 +0000 UTC m=+945.884814255" watchObservedRunningTime="2026-01-28 16:01:19.047161823 +0000 UTC 
m=+951.323133334" Jan 28 16:01:19 crc kubenswrapper[4903]: I0128 16:01:19.688327 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:19 crc kubenswrapper[4903]: I0128 16:01:19.734211 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6sz7"] Jan 28 16:01:21 crc kubenswrapper[4903]: I0128 16:01:21.640647 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k6sz7" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="registry-server" containerID="cri-o://8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7" gracePeriod=2 Jan 28 16:01:21 crc kubenswrapper[4903]: I0128 16:01:21.876193 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dvt8w"] Jan 28 16:01:21 crc kubenswrapper[4903]: I0128 16:01:21.877790 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dvt8w" Jan 28 16:01:21 crc kubenswrapper[4903]: I0128 16:01:21.881151 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 16:01:21 crc kubenswrapper[4903]: I0128 16:01:21.881297 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 16:01:21 crc kubenswrapper[4903]: I0128 16:01:21.881726 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-t6rjl" Jan 28 16:01:21 crc kubenswrapper[4903]: I0128 16:01:21.898759 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dvt8w"] Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.008727 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdndj\" (UniqueName: \"kubernetes.io/projected/a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26-kube-api-access-xdndj\") pod \"openstack-operator-index-dvt8w\" (UID: \"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26\") " pod="openstack-operators/openstack-operator-index-dvt8w" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.109895 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdndj\" (UniqueName: \"kubernetes.io/projected/a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26-kube-api-access-xdndj\") pod \"openstack-operator-index-dvt8w\" (UID: \"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26\") " pod="openstack-operators/openstack-operator-index-dvt8w" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.130409 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdndj\" (UniqueName: \"kubernetes.io/projected/a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26-kube-api-access-xdndj\") pod \"openstack-operator-index-dvt8w\" (UID: \"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26\") " pod="openstack-operators/openstack-operator-index-dvt8w" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.282270 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dvt8w" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.528505 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dvt8w"] Jan 28 16:01:22 crc kubenswrapper[4903]: W0128 16:01:22.529343 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1b2751f_4e94_4f1b_b03c_8bf3df7f2b26.slice/crio-5d9a5996fe0c1d8810a8db0010903bc9b15b0f052ec974f3dff44049cec25d99 WatchSource:0}: Error finding container 5d9a5996fe0c1d8810a8db0010903bc9b15b0f052ec974f3dff44049cec25d99: Status 404 returned error can't find the container with id 5d9a5996fe0c1d8810a8db0010903bc9b15b0f052ec974f3dff44049cec25d99 Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.539787 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.650362 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dvt8w" event={"ID":"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26","Type":"ContainerStarted","Data":"5d9a5996fe0c1d8810a8db0010903bc9b15b0f052ec974f3dff44049cec25d99"} Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.654237 4903 generic.go:334] "Generic (PLEG): container finished" podID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerID="8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7" exitCode=0 Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.654294 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6sz7" event={"ID":"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7","Type":"ContainerDied","Data":"8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7"} Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.654329 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6sz7" event={"ID":"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7","Type":"ContainerDied","Data":"957ce256bc2648aac4de707999403cef64007da25a302eee3e4a95b992c246a7"} Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.654352 4903 scope.go:117] "RemoveContainer" containerID="8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.654492 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6sz7" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.695055 4903 scope.go:117] "RemoveContainer" containerID="10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.721913 4903 scope.go:117] "RemoveContainer" containerID="8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.722991 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-utilities\") pod \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.723042 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq59d\" (UniqueName: \"kubernetes.io/projected/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-kube-api-access-xq59d\") pod \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.723095 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-catalog-content\") pod \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\" (UID: \"a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7\") " Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.724971 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-utilities" (OuterVolumeSpecName: "utilities") pod "a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" (UID: "a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.730426 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-kube-api-access-xq59d" (OuterVolumeSpecName: "kube-api-access-xq59d") pod "a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" (UID: "a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7"). InnerVolumeSpecName "kube-api-access-xq59d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.752295 4903 scope.go:117] "RemoveContainer" containerID="8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7" Jan 28 16:01:22 crc kubenswrapper[4903]: E0128 16:01:22.752811 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7\": container with ID starting with 8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7 not found: ID does not exist" containerID="8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.752844 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7"} err="failed to get container status \"8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7\": rpc error: code = NotFound desc = could not find container \"8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7\": container with ID starting with 8da33610eb2b6bf3d83cf249da1275c433099301956b05ae0cd01680f01cc6e7 not found: ID does not exist" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.752869 4903 scope.go:117] "RemoveContainer" containerID="10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa" Jan 28 16:01:22 crc kubenswrapper[4903]: E0128 16:01:22.753798 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa\": container with ID starting with 10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa not found: ID does not exist" containerID="10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.753844 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa"} err="failed to get container status \"10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa\": rpc error: code = NotFound desc = could not find container \"10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa\": container with ID starting with 10d35a2b734ba070d4d7e89d2ecf549b40f75a3e09099c277583f5c606d6c8aa not found: ID does not exist" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.753873 4903 scope.go:117] "RemoveContainer" containerID="8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c" Jan 28 16:01:22 crc kubenswrapper[4903]: E0128 16:01:22.754283 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c\": container with ID starting with 8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c not found: ID does not exist" containerID="8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.754322 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c"} err="failed to get container status \"8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c\": rpc error: code = NotFound desc = could not 
find container \"8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c\": container with ID starting with 8cc47e0a26aa8d786d290ff9096adc90313902484a70a70490a3bcbe19397c6c not found: ID does not exist" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.824386 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:01:22 crc kubenswrapper[4903]: I0128 16:01:22.825048 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq59d\" (UniqueName: \"kubernetes.io/projected/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-kube-api-access-xq59d\") on node \"crc\" DevicePath \"\"" Jan 28 16:01:23 crc kubenswrapper[4903]: I0128 16:01:23.561890 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" (UID: "a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:01:23 crc kubenswrapper[4903]: I0128 16:01:23.646768 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:01:23 crc kubenswrapper[4903]: I0128 16:01:23.885928 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6sz7"] Jan 28 16:01:23 crc kubenswrapper[4903]: I0128 16:01:23.893908 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k6sz7"] Jan 28 16:01:24 crc kubenswrapper[4903]: I0128 16:01:24.425551 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" path="/var/lib/kubelet/pods/a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7/volumes" Jan 28 16:01:24 crc kubenswrapper[4903]: I0128 16:01:24.668447 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dvt8w" event={"ID":"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26","Type":"ContainerStarted","Data":"07e24f6079c5e392b382c072834c05bf43045f24633ffcb85113ce33db27ff6b"} Jan 28 16:01:24 crc kubenswrapper[4903]: I0128 16:01:24.686088 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dvt8w" podStartSLOduration=2.482814349 podStartE2EDuration="3.68603653s" podCreationTimestamp="2026-01-28 16:01:21 +0000 UTC" firstStartedPulling="2026-01-28 16:01:22.531334772 +0000 UTC m=+954.807306293" lastFinishedPulling="2026-01-28 16:01:23.734556963 +0000 UTC m=+956.010528474" observedRunningTime="2026-01-28 16:01:24.685488564 +0000 UTC m=+956.961460095" watchObservedRunningTime="2026-01-28 16:01:24.68603653 +0000 UTC m=+956.962008081" Jan 28 16:01:26 crc kubenswrapper[4903]: I0128 16:01:26.613600 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:01:26 crc kubenswrapper[4903]: I0128 16:01:26.613685 4903 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:01:26 crc kubenswrapper[4903]: I0128 16:01:26.613749 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:01:26 crc kubenswrapper[4903]: I0128 16:01:26.614569 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"954d27b4dc9851fdaed58cb75beeee55d01523bc8e8b245b32b2ba4b08a3a068"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:01:26 crc kubenswrapper[4903]: I0128 16:01:26.614942 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://954d27b4dc9851fdaed58cb75beeee55d01523bc8e8b245b32b2ba4b08a3a068" gracePeriod=600 Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.461120 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-dvt8w"] Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.461678 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-dvt8w" podUID="a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26" containerName="registry-server" containerID="cri-o://07e24f6079c5e392b382c072834c05bf43045f24633ffcb85113ce33db27ff6b" gracePeriod=2 Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.698730 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="954d27b4dc9851fdaed58cb75beeee55d01523bc8e8b245b32b2ba4b08a3a068" exitCode=0 Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.698918 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"954d27b4dc9851fdaed58cb75beeee55d01523bc8e8b245b32b2ba4b08a3a068"} Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.700946 4903 scope.go:117] "RemoveContainer" containerID="076145b459522bcd2bea9cb08cae4aa7b63523e3096a45977e0c8639d4b92ae4" Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.700802 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"6c77af858064eabcd955be524624cd22b78fb67a11240b85f365bfaee93bd9c0"} Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.704019 4903 generic.go:334] "Generic (PLEG): container finished" podID="a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26" containerID="07e24f6079c5e392b382c072834c05bf43045f24633ffcb85113ce33db27ff6b" exitCode=0 Jan 28 16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.704071 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dvt8w" event={"ID":"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26","Type":"ContainerDied","Data":"07e24f6079c5e392b382c072834c05bf43045f24633ffcb85113ce33db27ff6b"} Jan 28 
16:01:27 crc kubenswrapper[4903]: I0128 16:01:27.880671 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dvt8w" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.008123 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdndj\" (UniqueName: \"kubernetes.io/projected/a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26-kube-api-access-xdndj\") pod \"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26\" (UID: \"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26\") " Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.013626 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26-kube-api-access-xdndj" (OuterVolumeSpecName: "kube-api-access-xdndj") pod "a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26" (UID: "a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26"). InnerVolumeSpecName "kube-api-access-xdndj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.109894 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdndj\" (UniqueName: \"kubernetes.io/projected/a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26-kube-api-access-xdndj\") on node \"crc\" DevicePath \"\"" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.273980 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l6m6l"] Jan 28 16:01:28 crc kubenswrapper[4903]: E0128 16:01:28.274572 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="registry-server" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.274688 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="registry-server" Jan 28 16:01:28 crc kubenswrapper[4903]: E0128 16:01:28.274801 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26" containerName="registry-server" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.274871 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26" containerName="registry-server" Jan 28 16:01:28 crc kubenswrapper[4903]: E0128 16:01:28.274924 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="extract-utilities" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.274984 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="extract-utilities" Jan 28 16:01:28 crc kubenswrapper[4903]: E0128 16:01:28.275075 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="extract-content" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.275144 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="extract-content" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.275359 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b8617f-dd8e-4dd8-9b33-caa8ebc95ad7" containerName="registry-server" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.275431 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26" containerName="registry-server" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.276186 4903 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.280388 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l6m6l"] Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.417611 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2f5z\" (UniqueName: \"kubernetes.io/projected/54efebb0-5c66-455f-9dc1-b822148ae462-kube-api-access-j2f5z\") pod \"openstack-operator-index-l6m6l\" (UID: \"54efebb0-5c66-455f-9dc1-b822148ae462\") " pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.518865 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2f5z\" (UniqueName: \"kubernetes.io/projected/54efebb0-5c66-455f-9dc1-b822148ae462-kube-api-access-j2f5z\") pod \"openstack-operator-index-l6m6l\" (UID: \"54efebb0-5c66-455f-9dc1-b822148ae462\") " pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.538921 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2f5z\" (UniqueName: \"kubernetes.io/projected/54efebb0-5c66-455f-9dc1-b822148ae462-kube-api-access-j2f5z\") pod \"openstack-operator-index-l6m6l\" (UID: \"54efebb0-5c66-455f-9dc1-b822148ae462\") " pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.598671 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.718678 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dvt8w" event={"ID":"a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26","Type":"ContainerDied","Data":"5d9a5996fe0c1d8810a8db0010903bc9b15b0f052ec974f3dff44049cec25d99"} Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.718719 4903 scope.go:117] "RemoveContainer" containerID="07e24f6079c5e392b382c072834c05bf43045f24633ffcb85113ce33db27ff6b" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.718809 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dvt8w" Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.744632 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-dvt8w"] Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.752754 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-dvt8w"] Jan 28 16:01:28 crc kubenswrapper[4903]: I0128 16:01:28.990901 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l6m6l"] Jan 28 16:01:29 crc kubenswrapper[4903]: W0128 16:01:29.011372 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54efebb0_5c66_455f_9dc1_b822148ae462.slice/crio-e3ee19ed1aef0abe7d91575c3d51c127910d33c8b5d00949829b25df770eab05 WatchSource:0}: Error finding container e3ee19ed1aef0abe7d91575c3d51c127910d33c8b5d00949829b25df770eab05: Status 404 returned error can't find the container with id e3ee19ed1aef0abe7d91575c3d51c127910d33c8b5d00949829b25df770eab05 Jan 28 16:01:29 crc kubenswrapper[4903]: I0128 16:01:29.729237 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l6m6l" event={"ID":"54efebb0-5c66-455f-9dc1-b822148ae462","Type":"ContainerStarted","Data":"e3ee19ed1aef0abe7d91575c3d51c127910d33c8b5d00949829b25df770eab05"} Jan 28 16:01:30 crc kubenswrapper[4903]: I0128 16:01:30.434095 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26" path="/var/lib/kubelet/pods/a1b2751f-4e94-4f1b-b03c-8bf3df7f2b26/volumes" Jan 28 16:01:30 crc kubenswrapper[4903]: I0128 16:01:30.736476 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l6m6l" event={"ID":"54efebb0-5c66-455f-9dc1-b822148ae462","Type":"ContainerStarted","Data":"d5a288e6d69a53d62ac9c90cd4ffee8bebb80dd1efea5bddef3c9ae66c9421da"} Jan 28 16:01:30 crc kubenswrapper[4903]: I0128 16:01:30.760926 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-l6m6l" podStartSLOduration=1.8470689949999999 podStartE2EDuration="2.760909474s" podCreationTimestamp="2026-01-28 16:01:28 +0000 UTC" firstStartedPulling="2026-01-28 16:01:29.015909205 +0000 UTC m=+961.291880716" lastFinishedPulling="2026-01-28 16:01:29.929749664 +0000 UTC m=+962.205721195" observedRunningTime="2026-01-28 16:01:30.751849146 +0000 UTC m=+963.027820667" watchObservedRunningTime="2026-01-28 16:01:30.760909474 +0000 UTC m=+963.036880985" Jan 28 16:01:38 crc kubenswrapper[4903]: I0128 16:01:38.598890 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:38 crc kubenswrapper[4903]: I0128 16:01:38.599785 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:38 crc kubenswrapper[4903]: I0128 16:01:38.650322 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:38 crc kubenswrapper[4903]: I0128 16:01:38.814626 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l6m6l" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.337552 4903 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8"] Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.340705 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.343084 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-h8vg5" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.349964 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8"] Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.447631 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpt4m\" (UniqueName: \"kubernetes.io/projected/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-kube-api-access-wpt4m\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.447706 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-bundle\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.447758 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-util\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.548699 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpt4m\" (UniqueName: \"kubernetes.io/projected/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-kube-api-access-wpt4m\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.548976 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-bundle\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.549059 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-util\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " 
pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.550057 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-util\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.550184 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-bundle\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.571259 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpt4m\" (UniqueName: \"kubernetes.io/projected/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-kube-api-access-wpt4m\") pod \"c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.686482 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:52 crc kubenswrapper[4903]: I0128 16:01:52.877514 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8"] Jan 28 16:01:53 crc kubenswrapper[4903]: I0128 16:01:53.888877 4903 generic.go:334] "Generic (PLEG): container finished" podID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerID="f6178ffec07a59e77491a31870229f847e507063e08ef98758dd6298d7038b06" exitCode=0 Jan 28 16:01:53 crc kubenswrapper[4903]: I0128 16:01:53.889427 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" event={"ID":"ff950f6c-eeb5-4726-a8fd-214ea6097cf8","Type":"ContainerDied","Data":"f6178ffec07a59e77491a31870229f847e507063e08ef98758dd6298d7038b06"} Jan 28 16:01:53 crc kubenswrapper[4903]: I0128 16:01:53.889562 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" event={"ID":"ff950f6c-eeb5-4726-a8fd-214ea6097cf8","Type":"ContainerStarted","Data":"7e1c8a067881cfc9a2482f3432b14d59ece6322283242ae089db766ba3b70d49"} Jan 28 16:01:54 crc kubenswrapper[4903]: I0128 16:01:54.897639 4903 generic.go:334] "Generic (PLEG): container finished" podID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerID="96318f72a56607ad17cd1f4077859393a23161f70e59693e410677145bd0cfb6" exitCode=0 Jan 28 16:01:54 crc kubenswrapper[4903]: I0128 16:01:54.897690 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" event={"ID":"ff950f6c-eeb5-4726-a8fd-214ea6097cf8","Type":"ContainerDied","Data":"96318f72a56607ad17cd1f4077859393a23161f70e59693e410677145bd0cfb6"} Jan 28 16:01:55 crc kubenswrapper[4903]: I0128 16:01:55.911980 4903 
generic.go:334] "Generic (PLEG): container finished" podID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerID="0aba889be03956c2836e67c490080d941562a85386d7f8abd0e6d77d9a5642ce" exitCode=0 Jan 28 16:01:55 crc kubenswrapper[4903]: I0128 16:01:55.912063 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" event={"ID":"ff950f6c-eeb5-4726-a8fd-214ea6097cf8","Type":"ContainerDied","Data":"0aba889be03956c2836e67c490080d941562a85386d7f8abd0e6d77d9a5642ce"} Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.151966 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.316987 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-bundle\") pod \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.317053 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-util\") pod \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.317135 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpt4m\" (UniqueName: \"kubernetes.io/projected/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-kube-api-access-wpt4m\") pod \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\" (UID: \"ff950f6c-eeb5-4726-a8fd-214ea6097cf8\") " Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.318503 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-bundle" (OuterVolumeSpecName: "bundle") pod "ff950f6c-eeb5-4726-a8fd-214ea6097cf8" (UID: "ff950f6c-eeb5-4726-a8fd-214ea6097cf8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.322684 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-kube-api-access-wpt4m" (OuterVolumeSpecName: "kube-api-access-wpt4m") pod "ff950f6c-eeb5-4726-a8fd-214ea6097cf8" (UID: "ff950f6c-eeb5-4726-a8fd-214ea6097cf8"). InnerVolumeSpecName "kube-api-access-wpt4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.331451 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-util" (OuterVolumeSpecName: "util") pod "ff950f6c-eeb5-4726-a8fd-214ea6097cf8" (UID: "ff950f6c-eeb5-4726-a8fd-214ea6097cf8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.418393 4903 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.418742 4903 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-util\") on node \"crc\" DevicePath \"\"" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.418755 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpt4m\" (UniqueName: \"kubernetes.io/projected/ff950f6c-eeb5-4726-a8fd-214ea6097cf8-kube-api-access-wpt4m\") on node \"crc\" DevicePath \"\"" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.927110 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" event={"ID":"ff950f6c-eeb5-4726-a8fd-214ea6097cf8","Type":"ContainerDied","Data":"7e1c8a067881cfc9a2482f3432b14d59ece6322283242ae089db766ba3b70d49"} Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.927148 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1c8a067881cfc9a2482f3432b14d59ece6322283242ae089db766ba3b70d49" Jan 28 16:01:57 crc kubenswrapper[4903]: I0128 16:01:57.927207 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c63724877025f1d18ba1fa29f8076dfd209b6bb3b67e44a6aa3755fab2xdvf8" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.884544 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-554f878768-lphwt"] Jan 28 16:02:02 crc kubenswrapper[4903]: E0128 16:02:02.885823 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerName="pull" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.885841 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerName="pull" Jan 28 16:02:02 crc kubenswrapper[4903]: E0128 16:02:02.885850 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerName="extract" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.885857 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerName="extract" Jan 28 16:02:02 crc kubenswrapper[4903]: E0128 16:02:02.885865 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerName="util" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.885871 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerName="util" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.885985 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff950f6c-eeb5-4726-a8fd-214ea6097cf8" containerName="extract" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.886371 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.888833 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-72p8l" Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.915550 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-554f878768-lphwt"] Jan 28 16:02:02 crc kubenswrapper[4903]: I0128 16:02:02.993676 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trcsj\" (UniqueName: \"kubernetes.io/projected/34213317-c77e-49d0-ab5b-d653672a13fd-kube-api-access-trcsj\") pod \"openstack-operator-controller-init-554f878768-lphwt\" (UID: \"34213317-c77e-49d0-ab5b-d653672a13fd\") " pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.076140 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8kl85"] Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.077429 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.091337 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kl85"] Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.095499 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trcsj\" (UniqueName: \"kubernetes.io/projected/34213317-c77e-49d0-ab5b-d653672a13fd-kube-api-access-trcsj\") pod \"openstack-operator-controller-init-554f878768-lphwt\" (UID: \"34213317-c77e-49d0-ab5b-d653672a13fd\") " pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.128676 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trcsj\" (UniqueName: \"kubernetes.io/projected/34213317-c77e-49d0-ab5b-d653672a13fd-kube-api-access-trcsj\") pod \"openstack-operator-controller-init-554f878768-lphwt\" (UID: \"34213317-c77e-49d0-ab5b-d653672a13fd\") " pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.196467 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665sj\" (UniqueName: \"kubernetes.io/projected/ee8fa7bd-cd42-437b-9fb9-b336025d6398-kube-api-access-665sj\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.196543 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-utilities\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.196603 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-catalog-content\") pod 
\"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.208155 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.298773 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665sj\" (UniqueName: \"kubernetes.io/projected/ee8fa7bd-cd42-437b-9fb9-b336025d6398-kube-api-access-665sj\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.299051 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-utilities\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.299080 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-catalog-content\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.299521 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-catalog-content\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.299767 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-utilities\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.330793 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665sj\" (UniqueName: \"kubernetes.io/projected/ee8fa7bd-cd42-437b-9fb9-b336025d6398-kube-api-access-665sj\") pod \"community-operators-8kl85\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.393129 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.680841 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-554f878768-lphwt"] Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.686622 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8kl85"] Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.961035 4903 generic.go:334] "Generic (PLEG): container finished" podID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerID="90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933" exitCode=0 Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.961125 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kl85" event={"ID":"ee8fa7bd-cd42-437b-9fb9-b336025d6398","Type":"ContainerDied","Data":"90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933"} Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.961393 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kl85" event={"ID":"ee8fa7bd-cd42-437b-9fb9-b336025d6398","Type":"ContainerStarted","Data":"83743d28fd9f01c0925567f452cb9f76568cabb6fa77d9df4d4f94fba53e12de"} Jan 28 16:02:03 crc kubenswrapper[4903]: I0128 16:02:03.962597 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" event={"ID":"34213317-c77e-49d0-ab5b-d653672a13fd","Type":"ContainerStarted","Data":"3ba86d6574a9d61ba1fe1eb367e128a867a81fcb9bd0eb43cfed84daadac8ced"} Jan 28 16:02:04 crc kubenswrapper[4903]: I0128 16:02:04.970643 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kl85" event={"ID":"ee8fa7bd-cd42-437b-9fb9-b336025d6398","Type":"ContainerStarted","Data":"1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04"} Jan 28 16:02:05 crc kubenswrapper[4903]: I0128 16:02:05.980659 4903 generic.go:334] "Generic (PLEG): container finished" podID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerID="1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04" exitCode=0 Jan 28 16:02:05 crc kubenswrapper[4903]: I0128 16:02:05.980706 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kl85" event={"ID":"ee8fa7bd-cd42-437b-9fb9-b336025d6398","Type":"ContainerDied","Data":"1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04"} Jan 28 16:02:09 crc kubenswrapper[4903]: I0128 16:02:09.000042 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kl85" event={"ID":"ee8fa7bd-cd42-437b-9fb9-b336025d6398","Type":"ContainerStarted","Data":"466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632"} Jan 28 16:02:09 crc kubenswrapper[4903]: I0128 16:02:09.001480 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" event={"ID":"34213317-c77e-49d0-ab5b-d653672a13fd","Type":"ContainerStarted","Data":"c26c4458888c4157fbcefad045db9cba3ac3459f2d03a9080940ebdbee413c64"} Jan 28 16:02:09 crc kubenswrapper[4903]: I0128 16:02:09.001713 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" Jan 28 16:02:09 crc kubenswrapper[4903]: I0128 16:02:09.022856 4903 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8kl85" podStartSLOduration=1.63267928 podStartE2EDuration="6.022838087s" podCreationTimestamp="2026-01-28 16:02:03 +0000 UTC" firstStartedPulling="2026-01-28 16:02:03.962176185 +0000 UTC m=+996.238147696" lastFinishedPulling="2026-01-28 16:02:08.352334992 +0000 UTC m=+1000.628306503" observedRunningTime="2026-01-28 16:02:09.017609064 +0000 UTC m=+1001.293580585" watchObservedRunningTime="2026-01-28 16:02:09.022838087 +0000 UTC m=+1001.298809608" Jan 28 16:02:13 crc kubenswrapper[4903]: I0128 16:02:13.211254 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" Jan 28 16:02:13 crc kubenswrapper[4903]: I0128 16:02:13.250863 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-554f878768-lphwt" podStartSLOduration=6.582825114 podStartE2EDuration="11.250839325s" podCreationTimestamp="2026-01-28 16:02:02 +0000 UTC" firstStartedPulling="2026-01-28 16:02:03.696482902 +0000 UTC m=+995.972454403" lastFinishedPulling="2026-01-28 16:02:08.364497103 +0000 UTC m=+1000.640468614" observedRunningTime="2026-01-28 16:02:09.052119437 +0000 UTC m=+1001.328090958" watchObservedRunningTime="2026-01-28 16:02:13.250839325 +0000 UTC m=+1005.526810866" Jan 28 16:02:13 crc kubenswrapper[4903]: I0128 16:02:13.393857 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:13 crc kubenswrapper[4903]: I0128 16:02:13.393942 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:13 crc kubenswrapper[4903]: I0128 16:02:13.451857 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:14 crc kubenswrapper[4903]: I0128 16:02:14.082034 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:14 crc kubenswrapper[4903]: I0128 16:02:14.668028 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8kl85"] Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.058461 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8kl85" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="registry-server" containerID="cri-o://466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632" gracePeriod=2 Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.442522 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.592353 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-catalog-content\") pod \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.592505 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-utilities\") pod \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.592594 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-665sj\" (UniqueName: \"kubernetes.io/projected/ee8fa7bd-cd42-437b-9fb9-b336025d6398-kube-api-access-665sj\") pod \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\" (UID: \"ee8fa7bd-cd42-437b-9fb9-b336025d6398\") " Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.594201 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-utilities" (OuterVolumeSpecName: "utilities") pod "ee8fa7bd-cd42-437b-9fb9-b336025d6398" (UID: "ee8fa7bd-cd42-437b-9fb9-b336025d6398"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.601503 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8fa7bd-cd42-437b-9fb9-b336025d6398-kube-api-access-665sj" (OuterVolumeSpecName: "kube-api-access-665sj") pod "ee8fa7bd-cd42-437b-9fb9-b336025d6398" (UID: "ee8fa7bd-cd42-437b-9fb9-b336025d6398"). InnerVolumeSpecName "kube-api-access-665sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.665647 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee8fa7bd-cd42-437b-9fb9-b336025d6398" (UID: "ee8fa7bd-cd42-437b-9fb9-b336025d6398"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.694079 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.694130 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee8fa7bd-cd42-437b-9fb9-b336025d6398-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:02:16 crc kubenswrapper[4903]: I0128 16:02:16.694148 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-665sj\" (UniqueName: \"kubernetes.io/projected/ee8fa7bd-cd42-437b-9fb9-b336025d6398-kube-api-access-665sj\") on node \"crc\" DevicePath \"\"" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.065244 4903 generic.go:334] "Generic (PLEG): container finished" podID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerID="466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632" exitCode=0 Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.065298 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kl85" event={"ID":"ee8fa7bd-cd42-437b-9fb9-b336025d6398","Type":"ContainerDied","Data":"466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632"} Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.065325 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8kl85" event={"ID":"ee8fa7bd-cd42-437b-9fb9-b336025d6398","Type":"ContainerDied","Data":"83743d28fd9f01c0925567f452cb9f76568cabb6fa77d9df4d4f94fba53e12de"} Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.065340 4903 scope.go:117] "RemoveContainer" containerID="466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.065398 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8kl85" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.081995 4903 scope.go:117] "RemoveContainer" containerID="1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.093657 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8kl85"] Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.098810 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8kl85"] Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.114431 4903 scope.go:117] "RemoveContainer" containerID="90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.127297 4903 scope.go:117] "RemoveContainer" containerID="466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632" Jan 28 16:02:17 crc kubenswrapper[4903]: E0128 16:02:17.127646 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632\": container with ID starting with 466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632 not found: ID does not exist" containerID="466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.127691 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632"} err="failed to get container status \"466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632\": rpc error: code = NotFound desc = could not find container \"466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632\": container with ID starting with 466fdb1533d406fe538671f7ac5701ea6a83fd8f3c11f14fbebe4c2ed1f93632 not found: ID does not exist" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.127710 4903 scope.go:117] "RemoveContainer" containerID="1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04" Jan 28 16:02:17 crc kubenswrapper[4903]: E0128 16:02:17.127980 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04\": container with ID starting with 1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04 not found: ID does not exist" containerID="1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.128019 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04"} err="failed to get container status \"1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04\": rpc error: code = NotFound desc = could not find container \"1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04\": container with ID starting with 1805a95eb05a30ebb4cb78e8e9b4379b5c863ec00814c0e2ace1314875b79d04 not found: ID does not exist" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.128043 4903 scope.go:117] "RemoveContainer" containerID="90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933" Jan 28 16:02:17 crc kubenswrapper[4903]: E0128 16:02:17.128368 4903 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933\": container with ID starting with 90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933 not found: ID does not exist" containerID="90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933" Jan 28 16:02:17 crc kubenswrapper[4903]: I0128 16:02:17.128398 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933"} err="failed to get container status \"90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933\": rpc error: code = NotFound desc = could not find container \"90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933\": container with ID starting with 90d4795ab274d7231a0bc630d089fe6727043b7a7a116f8a387e15c67ab6a933 not found: ID does not exist" Jan 28 16:02:18 crc kubenswrapper[4903]: I0128 16:02:18.422718 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" path="/var/lib/kubelet/pods/ee8fa7bd-cd42-437b-9fb9-b336025d6398/volumes" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.680214 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75"] Jan 28 16:02:42 crc kubenswrapper[4903]: E0128 16:02:42.681305 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="extract-utilities" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.681324 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="extract-utilities" Jan 28 16:02:42 crc kubenswrapper[4903]: E0128 16:02:42.681345 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="registry-server" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.681352 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="registry-server" Jan 28 16:02:42 crc kubenswrapper[4903]: E0128 16:02:42.681365 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="extract-content" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.681373 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="extract-content" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.681504 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8fa7bd-cd42-437b-9fb9-b336025d6398" containerName="registry-server" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.682028 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.687192 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.688264 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.690242 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-89v6n" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.691773 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-npw2b" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.696235 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.697203 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.699779 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-nlw9j" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.710605 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.724744 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.743632 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.751591 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.752590 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.752592 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stghn\" (UniqueName: \"kubernetes.io/projected/07b182ea-9e7b-4b3c-9bbc-677a6f61b9af-kube-api-access-stghn\") pod \"cinder-operator-controller-manager-7478f7dbf9-99ppz\" (UID: \"07b182ea-9e7b-4b3c-9bbc-677a6f61b9af\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.753091 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wc65\" (UniqueName: \"kubernetes.io/projected/13516b4d-d8ad-48a6-8794-305f46b7a2aa-kube-api-access-5wc65\") pod \"barbican-operator-controller-manager-7f86f8796f-m8d75\" (UID: \"13516b4d-d8ad-48a6-8794-305f46b7a2aa\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.753139 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqfb\" (UniqueName: \"kubernetes.io/projected/7ec3d7c1-5943-4992-b2fd-4538131573f6-kube-api-access-pmqfb\") pod \"designate-operator-controller-manager-b45d7bf98-7vnbg\" (UID: \"7ec3d7c1-5943-4992-b2fd-4538131573f6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.754823 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.755813 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-bf2v4" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.756085 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.758341 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vtrg7" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.781579 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.794623 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.802157 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.803137 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.811804 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wr7n5" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.827513 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.845792 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.847294 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.854189 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wc65\" (UniqueName: \"kubernetes.io/projected/13516b4d-d8ad-48a6-8794-305f46b7a2aa-kube-api-access-5wc65\") pod \"barbican-operator-controller-manager-7f86f8796f-m8d75\" (UID: \"13516b4d-d8ad-48a6-8794-305f46b7a2aa\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.854237 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqfb\" (UniqueName: \"kubernetes.io/projected/7ec3d7c1-5943-4992-b2fd-4538131573f6-kube-api-access-pmqfb\") pod \"designate-operator-controller-manager-b45d7bf98-7vnbg\" (UID: \"7ec3d7c1-5943-4992-b2fd-4538131573f6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.854297 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q48xq\" (UniqueName: \"kubernetes.io/projected/7be2e4ab-c0e6-4a70-9aba-d59133aa071f-kube-api-access-q48xq\") pod \"glance-operator-controller-manager-78fdd796fd-2ks9s\" (UID: \"7be2e4ab-c0e6-4a70-9aba-d59133aa071f\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.854329 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mg2\" (UniqueName: \"kubernetes.io/projected/2f636d64-bf91-43aa-ba24-b7a65cc968e4-kube-api-access-w5mg2\") pod \"horizon-operator-controller-manager-77d5c5b54f-wnkln\" (UID: \"2f636d64-bf91-43aa-ba24-b7a65cc968e4\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.854348 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6t8m\" (UniqueName: \"kubernetes.io/projected/d987a2a1-ec4e-4332-bdd3-8d20e9e35efb-kube-api-access-k6t8m\") pod \"heat-operator-controller-manager-594c8c9d5d-62q56\" (UID: \"d987a2a1-ec4e-4332-bdd3-8d20e9e35efb\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.854388 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stghn\" (UniqueName: \"kubernetes.io/projected/07b182ea-9e7b-4b3c-9bbc-677a6f61b9af-kube-api-access-stghn\") pod 
\"cinder-operator-controller-manager-7478f7dbf9-99ppz\" (UID: \"07b182ea-9e7b-4b3c-9bbc-677a6f61b9af\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.863204 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5bbk6" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.863438 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.899132 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.912174 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqfb\" (UniqueName: \"kubernetes.io/projected/7ec3d7c1-5943-4992-b2fd-4538131573f6-kube-api-access-pmqfb\") pod \"designate-operator-controller-manager-b45d7bf98-7vnbg\" (UID: \"7ec3d7c1-5943-4992-b2fd-4538131573f6\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.913171 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stghn\" (UniqueName: \"kubernetes.io/projected/07b182ea-9e7b-4b3c-9bbc-677a6f61b9af-kube-api-access-stghn\") pod \"cinder-operator-controller-manager-7478f7dbf9-99ppz\" (UID: \"07b182ea-9e7b-4b3c-9bbc-677a6f61b9af\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.914671 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.915604 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.917449 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-24242" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.920147 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wc65\" (UniqueName: \"kubernetes.io/projected/13516b4d-d8ad-48a6-8794-305f46b7a2aa-kube-api-access-5wc65\") pod \"barbican-operator-controller-manager-7f86f8796f-m8d75\" (UID: \"13516b4d-d8ad-48a6-8794-305f46b7a2aa\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.965332 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.965393 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmfw\" (UniqueName: \"kubernetes.io/projected/7dee66d6-e59c-4cd4-b730-f31b7f3564b2-kube-api-access-jbmfw\") pod \"ironic-operator-controller-manager-598f7747c9-xpxb9\" (UID: \"7dee66d6-e59c-4cd4-b730-f31b7f3564b2\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.965435 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q48xq\" (UniqueName: \"kubernetes.io/projected/7be2e4ab-c0e6-4a70-9aba-d59133aa071f-kube-api-access-q48xq\") pod \"glance-operator-controller-manager-78fdd796fd-2ks9s\" (UID: \"7be2e4ab-c0e6-4a70-9aba-d59133aa071f\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.965473 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mg2\" (UniqueName: \"kubernetes.io/projected/2f636d64-bf91-43aa-ba24-b7a65cc968e4-kube-api-access-w5mg2\") pod \"horizon-operator-controller-manager-77d5c5b54f-wnkln\" (UID: \"2f636d64-bf91-43aa-ba24-b7a65cc968e4\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.965501 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t8m\" (UniqueName: \"kubernetes.io/projected/d987a2a1-ec4e-4332-bdd3-8d20e9e35efb-kube-api-access-k6t8m\") pod \"heat-operator-controller-manager-594c8c9d5d-62q56\" (UID: \"d987a2a1-ec4e-4332-bdd3-8d20e9e35efb\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.965570 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj549\" (UniqueName: \"kubernetes.io/projected/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-kube-api-access-rj549\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:42 crc 
kubenswrapper[4903]: I0128 16:02:42.968588 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.977426 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.978371 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.984622 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp"] Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.985116 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-l9plz" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.985694 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" Jan 28 16:02:42 crc kubenswrapper[4903]: I0128 16:02:42.987513 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nlhsh" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.002063 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.004134 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6t8m\" (UniqueName: \"kubernetes.io/projected/d987a2a1-ec4e-4332-bdd3-8d20e9e35efb-kube-api-access-k6t8m\") pod \"heat-operator-controller-manager-594c8c9d5d-62q56\" (UID: \"d987a2a1-ec4e-4332-bdd3-8d20e9e35efb\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.009102 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mg2\" (UniqueName: \"kubernetes.io/projected/2f636d64-bf91-43aa-ba24-b7a65cc968e4-kube-api-access-w5mg2\") pod \"horizon-operator-controller-manager-77d5c5b54f-wnkln\" (UID: \"2f636d64-bf91-43aa-ba24-b7a65cc968e4\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.010689 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q48xq\" (UniqueName: \"kubernetes.io/projected/7be2e4ab-c0e6-4a70-9aba-d59133aa071f-kube-api-access-q48xq\") pod \"glance-operator-controller-manager-78fdd796fd-2ks9s\" (UID: \"7be2e4ab-c0e6-4a70-9aba-d59133aa071f\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.010771 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.011790 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.015917 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lvng5" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.016337 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.017600 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.027923 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.029022 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.030083 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.031550 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-phw29" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.048173 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.048942 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.054773 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.067130 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj549\" (UniqueName: \"kubernetes.io/projected/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-kube-api-access-rj549\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.067221 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx87g\" (UniqueName: \"kubernetes.io/projected/1e436aa5-21ea-4f24-8144-b8800f7286d3-kube-api-access-dx87g\") pod \"keystone-operator-controller-manager-b8b6d4659-2rvjx\" (UID: \"1e436aa5-21ea-4f24-8144-b8800f7286d3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.067264 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8tkh\" (UniqueName: \"kubernetes.io/projected/2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e-kube-api-access-l8tkh\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-pp8td\" (UID: \"2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.067299 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.067353 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l28xr\" (UniqueName: \"kubernetes.io/projected/fc7c6b24-fa62-48ac-8eca-4b4055313f60-kube-api-access-l28xr\") pod \"manila-operator-controller-manager-78c6999f6f-bf2vp\" (UID: \"fc7c6b24-fa62-48ac-8eca-4b4055313f60\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.067600 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbmfw\" (UniqueName: \"kubernetes.io/projected/7dee66d6-e59c-4cd4-b730-f31b7f3564b2-kube-api-access-jbmfw\") pod \"ironic-operator-controller-manager-598f7747c9-xpxb9\" (UID: \"7dee66d6-e59c-4cd4-b730-f31b7f3564b2\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.068523 4903 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.068603 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert podName:812b8d8c-d506-46e8-a049-9d3b6d3c05e9 nodeName:}" failed. 
No retries permitted until 2026-01-28 16:02:43.568585954 +0000 UTC m=+1035.844557465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert") pod "infra-operator-controller-manager-694cf4f878-m2s8p" (UID: "812b8d8c-d506-46e8-a049-9d3b6d3c05e9") : secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.073471 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-874fz"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.074273 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.077756 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.079597 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-874fz"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.086517 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-f9rp2" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.088350 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.089720 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.098977 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.099869 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.100156 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.100345 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fd6sv" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.105516 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7l5vw" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.105642 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.109137 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbmfw\" (UniqueName: \"kubernetes.io/projected/7dee66d6-e59c-4cd4-b730-f31b7f3564b2-kube-api-access-jbmfw\") pod \"ironic-operator-controller-manager-598f7747c9-xpxb9\" (UID: \"7dee66d6-e59c-4cd4-b730-f31b7f3564b2\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.111743 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.119696 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.130933 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.131767 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.134686 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nzkkz" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.137256 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.138031 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.142504 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj549\" (UniqueName: \"kubernetes.io/projected/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-kube-api-access-rj549\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.144833 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gsns7" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.152760 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.153176 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.158861 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168366 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l28xr\" (UniqueName: \"kubernetes.io/projected/fc7c6b24-fa62-48ac-8eca-4b4055313f60-kube-api-access-l28xr\") pod \"manila-operator-controller-manager-78c6999f6f-bf2vp\" (UID: \"fc7c6b24-fa62-48ac-8eca-4b4055313f60\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168406 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnzm\" (UniqueName: \"kubernetes.io/projected/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-kube-api-access-rrnzm\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168431 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm55x\" (UniqueName: \"kubernetes.io/projected/f444752b-b039-47d4-b969-c8f8bcdcc4df-kube-api-access-hm55x\") pod \"neutron-operator-controller-manager-78d58447c5-lxn64\" (UID: \"f444752b-b039-47d4-b969-c8f8bcdcc4df\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168492 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx87g\" (UniqueName: \"kubernetes.io/projected/1e436aa5-21ea-4f24-8144-b8800f7286d3-kube-api-access-dx87g\") pod \"keystone-operator-controller-manager-b8b6d4659-2rvjx\" (UID: \"1e436aa5-21ea-4f24-8144-b8800f7286d3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168513 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8tkh\" (UniqueName: \"kubernetes.io/projected/2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e-kube-api-access-l8tkh\") pod 
\"mariadb-operator-controller-manager-6b9fb5fdcb-pp8td\" (UID: \"2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168559 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168576 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcsr\" (UniqueName: \"kubernetes.io/projected/51460248-29df-4549-bb85-decda4cec14b-kube-api-access-8pcsr\") pod \"octavia-operator-controller-manager-5f4cd88d46-9g75t\" (UID: \"51460248-29df-4549-bb85-decda4cec14b\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.168595 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8nhw\" (UniqueName: \"kubernetes.io/projected/3bd7a3f4-2963-4ac2-8f33-83a667789a33-kube-api-access-k8nhw\") pod \"nova-operator-controller-manager-7bdb645866-874fz\" (UID: \"3bd7a3f4-2963-4ac2-8f33-83a667789a33\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.181590 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.182436 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.186187 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.187882 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6f5kj" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.189049 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.195108 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-47vzj" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.205855 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx87g\" (UniqueName: \"kubernetes.io/projected/1e436aa5-21ea-4f24-8144-b8800f7286d3-kube-api-access-dx87g\") pod \"keystone-operator-controller-manager-b8b6d4659-2rvjx\" (UID: \"1e436aa5-21ea-4f24-8144-b8800f7286d3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.205923 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.223995 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.240339 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l28xr\" (UniqueName: \"kubernetes.io/projected/fc7c6b24-fa62-48ac-8eca-4b4055313f60-kube-api-access-l28xr\") pod \"manila-operator-controller-manager-78c6999f6f-bf2vp\" (UID: \"fc7c6b24-fa62-48ac-8eca-4b4055313f60\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.258224 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8tkh\" (UniqueName: \"kubernetes.io/projected/2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e-kube-api-access-l8tkh\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-pp8td\" (UID: \"2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.268763 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.269803 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.269843 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pcsr\" (UniqueName: \"kubernetes.io/projected/51460248-29df-4549-bb85-decda4cec14b-kube-api-access-8pcsr\") pod \"octavia-operator-controller-manager-5f4cd88d46-9g75t\" (UID: \"51460248-29df-4549-bb85-decda4cec14b\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.269863 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8nhw\" (UniqueName: \"kubernetes.io/projected/3bd7a3f4-2963-4ac2-8f33-83a667789a33-kube-api-access-k8nhw\") pod \"nova-operator-controller-manager-7bdb645866-874fz\" (UID: \"3bd7a3f4-2963-4ac2-8f33-83a667789a33\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.269904 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw5dm\" (UniqueName: \"kubernetes.io/projected/a18a6add-e3ae-4914-93c7-0ac2ec35b53a-kube-api-access-rw5dm\") pod \"telemetry-operator-controller-manager-85cd9769bb-c64gq\" (UID: \"a18a6add-e3ae-4914-93c7-0ac2ec35b53a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.269927 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrnzm\" (UniqueName: \"kubernetes.io/projected/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-kube-api-access-rrnzm\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.269946 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm55x\" (UniqueName: \"kubernetes.io/projected/f444752b-b039-47d4-b969-c8f8bcdcc4df-kube-api-access-hm55x\") pod \"neutron-operator-controller-manager-78d58447c5-lxn64\" (UID: \"f444752b-b039-47d4-b969-c8f8bcdcc4df\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.270017 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtkts\" (UniqueName: \"kubernetes.io/projected/726d95fa-5b8a-4c1b-ae91-54f53b1141a9-kube-api-access-qtkts\") pod \"placement-operator-controller-manager-79d5ccc684-shc7w\" (UID: \"726d95fa-5b8a-4c1b-ae91-54f53b1141a9\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.270069 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nmbcz\" (UniqueName: \"kubernetes.io/projected/a527edb1-eb6a-4b49-b167-cde14f2dc01f-kube-api-access-nmbcz\") pod \"swift-operator-controller-manager-547cbdb99f-x74hn\" (UID: \"a527edb1-eb6a-4b49-b167-cde14f2dc01f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.270095 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6bs\" (UniqueName: \"kubernetes.io/projected/65141c35-eda1-43fa-ae32-9f86b4bf5315-kube-api-access-jc6bs\") pod \"ovn-operator-controller-manager-6f75f45d54-ktd5l\" (UID: \"65141c35-eda1-43fa-ae32-9f86b4bf5315\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.270248 4903 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.270287 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert podName:f87bd6ee-e507-4dd8-b987-4d67aa7d5d85 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:43.77027256 +0000 UTC m=+1036.046244071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" (UID: "f87bd6ee-e507-4dd8-b987-4d67aa7d5d85") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.284345 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.292103 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.313771 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrnzm\" (UniqueName: \"kubernetes.io/projected/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-kube-api-access-rrnzm\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.313794 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.320349 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hz5zv" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.326163 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm55x\" (UniqueName: \"kubernetes.io/projected/f444752b-b039-47d4-b969-c8f8bcdcc4df-kube-api-access-hm55x\") pod \"neutron-operator-controller-manager-78d58447c5-lxn64\" (UID: \"f444752b-b039-47d4-b969-c8f8bcdcc4df\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.348479 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8nhw\" (UniqueName: \"kubernetes.io/projected/3bd7a3f4-2963-4ac2-8f33-83a667789a33-kube-api-access-k8nhw\") pod \"nova-operator-controller-manager-7bdb645866-874fz\" (UID: \"3bd7a3f4-2963-4ac2-8f33-83a667789a33\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.356642 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pcsr\" (UniqueName: \"kubernetes.io/projected/51460248-29df-4549-bb85-decda4cec14b-kube-api-access-8pcsr\") pod \"octavia-operator-controller-manager-5f4cd88d46-9g75t\" (UID: \"51460248-29df-4549-bb85-decda4cec14b\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.370127 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.380390 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm9gd\" (UniqueName: \"kubernetes.io/projected/8b4e8a0c-bd20-4a9e-8b40-2f14d601325f-kube-api-access-mm9gd\") pod \"test-operator-controller-manager-69797bbcbd-6qzt6\" (UID: \"8b4e8a0c-bd20-4a9e-8b40-2f14d601325f\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.380579 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbcz\" (UniqueName: \"kubernetes.io/projected/a527edb1-eb6a-4b49-b167-cde14f2dc01f-kube-api-access-nmbcz\") pod \"swift-operator-controller-manager-547cbdb99f-x74hn\" (UID: \"a527edb1-eb6a-4b49-b167-cde14f2dc01f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.380669 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6bs\" (UniqueName: \"kubernetes.io/projected/65141c35-eda1-43fa-ae32-9f86b4bf5315-kube-api-access-jc6bs\") pod \"ovn-operator-controller-manager-6f75f45d54-ktd5l\" (UID: \"65141c35-eda1-43fa-ae32-9f86b4bf5315\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.381951 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw5dm\" (UniqueName: \"kubernetes.io/projected/a18a6add-e3ae-4914-93c7-0ac2ec35b53a-kube-api-access-rw5dm\") pod \"telemetry-operator-controller-manager-85cd9769bb-c64gq\" (UID: \"a18a6add-e3ae-4914-93c7-0ac2ec35b53a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.382121 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtkts\" (UniqueName: \"kubernetes.io/projected/726d95fa-5b8a-4c1b-ae91-54f53b1141a9-kube-api-access-qtkts\") pod \"placement-operator-controller-manager-79d5ccc684-shc7w\" (UID: \"726d95fa-5b8a-4c1b-ae91-54f53b1141a9\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.396994 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-78xkf"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.419167 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.436391 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hh8xw" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.439881 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw5dm\" (UniqueName: \"kubernetes.io/projected/a18a6add-e3ae-4914-93c7-0ac2ec35b53a-kube-api-access-rw5dm\") pod \"telemetry-operator-controller-manager-85cd9769bb-c64gq\" (UID: \"a18a6add-e3ae-4914-93c7-0ac2ec35b53a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.444156 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.447042 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.450453 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6bs\" (UniqueName: \"kubernetes.io/projected/65141c35-eda1-43fa-ae32-9f86b4bf5315-kube-api-access-jc6bs\") pod \"ovn-operator-controller-manager-6f75f45d54-ktd5l\" (UID: \"65141c35-eda1-43fa-ae32-9f86b4bf5315\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.468252 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.474288 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtkts\" (UniqueName: \"kubernetes.io/projected/726d95fa-5b8a-4c1b-ae91-54f53b1141a9-kube-api-access-qtkts\") pod \"placement-operator-controller-manager-79d5ccc684-shc7w\" (UID: \"726d95fa-5b8a-4c1b-ae91-54f53b1141a9\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.481817 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-78xkf"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.485913 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmbcz\" (UniqueName: \"kubernetes.io/projected/a527edb1-eb6a-4b49-b167-cde14f2dc01f-kube-api-access-nmbcz\") pod \"swift-operator-controller-manager-547cbdb99f-x74hn\" (UID: \"a527edb1-eb6a-4b49-b167-cde14f2dc01f\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.487655 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm9gd\" (UniqueName: \"kubernetes.io/projected/8b4e8a0c-bd20-4a9e-8b40-2f14d601325f-kube-api-access-mm9gd\") pod \"test-operator-controller-manager-69797bbcbd-6qzt6\" (UID: \"8b4e8a0c-bd20-4a9e-8b40-2f14d601325f\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.495085 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.530146 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.531165 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.563064 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.565737 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.568978 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.579086 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.579160 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.579351 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-r85c6" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.589828 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.590909 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.591743 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9gd\" (UniqueName: \"kubernetes.io/projected/8b4e8a0c-bd20-4a9e-8b40-2f14d601325f-kube-api-access-mm9gd\") pod \"test-operator-controller-manager-69797bbcbd-6qzt6\" (UID: \"8b4e8a0c-bd20-4a9e-8b40-2f14d601325f\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.599426 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h69m5\" (UniqueName: \"kubernetes.io/projected/b8e4b217-041d-4097-9ede-0e6ea89353a4-kube-api-access-h69m5\") pod \"watcher-operator-controller-manager-564965969-78xkf\" (UID: \"b8e4b217-041d-4097-9ede-0e6ea89353a4\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.600831 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d7s4q" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.601749 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.601884 4903 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.601926 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert podName:812b8d8c-d506-46e8-a049-9d3b6d3c05e9 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:44.601910023 +0000 UTC m=+1036.877881534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert") pod "infra-operator-controller-manager-694cf4f878-m2s8p" (UID: "812b8d8c-d506-46e8-a049-9d3b6d3c05e9") : secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.634922 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.657070 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.679128 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.699064 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.705325 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h69m5\" (UniqueName: \"kubernetes.io/projected/b8e4b217-041d-4097-9ede-0e6ea89353a4-kube-api-access-h69m5\") pod \"watcher-operator-controller-manager-564965969-78xkf\" (UID: \"b8e4b217-041d-4097-9ede-0e6ea89353a4\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.705758 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mp5k\" (UniqueName: \"kubernetes.io/projected/55ba9bac-caa2-495d-b933-661303f3c265-kube-api-access-8mp5k\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.705881 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.705969 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.706039 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4jp\" (UniqueName: \"kubernetes.io/projected/963c0e0a-fd5c-4156-ae48-02c4573137f1-kube-api-access-sz4jp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rf2vw\" (UID: \"963c0e0a-fd5c-4156-ae48-02c4573137f1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.730890 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h69m5\" (UniqueName: \"kubernetes.io/projected/b8e4b217-041d-4097-9ede-0e6ea89353a4-kube-api-access-h69m5\") pod \"watcher-operator-controller-manager-564965969-78xkf\" (UID: \"b8e4b217-041d-4097-9ede-0e6ea89353a4\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.775819 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.803117 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.807201 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.807249 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4jp\" (UniqueName: \"kubernetes.io/projected/963c0e0a-fd5c-4156-ae48-02c4573137f1-kube-api-access-sz4jp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rf2vw\" (UID: \"963c0e0a-fd5c-4156-ae48-02c4573137f1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.807518 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.807553 4903 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.807620 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:44.307588669 +0000 UTC m=+1036.583560180 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "metrics-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.807640 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mp5k\" (UniqueName: \"kubernetes.io/projected/55ba9bac-caa2-495d-b933-661303f3c265-kube-api-access-8mp5k\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.807686 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.807701 4903 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.807761 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert podName:f87bd6ee-e507-4dd8-b987-4d67aa7d5d85 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:44.807741103 +0000 UTC m=+1037.083712694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" (UID: "f87bd6ee-e507-4dd8-b987-4d67aa7d5d85") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.807783 4903 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: E0128 16:02:43.807807 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:44.307800764 +0000 UTC m=+1036.583772265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "webhook-server-cert" not found Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.829019 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mp5k\" (UniqueName: \"kubernetes.io/projected/55ba9bac-caa2-495d-b933-661303f3c265-kube-api-access-8mp5k\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.829498 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4jp\" (UniqueName: \"kubernetes.io/projected/963c0e0a-fd5c-4156-ae48-02c4573137f1-kube-api-access-sz4jp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rf2vw\" (UID: \"963c0e0a-fd5c-4156-ae48-02c4573137f1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.856140 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75"] Jan 28 16:02:43 crc kubenswrapper[4903]: I0128 16:02:43.920647 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.117662 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz"] Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.135246 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s"] Jan 28 16:02:44 crc kubenswrapper[4903]: W0128 16:02:44.200669 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be2e4ab_c0e6_4a70_9aba_d59133aa071f.slice/crio-1f48c09d48294f9ae8c29dec3a36661f2574954ceb02ae2d5364824608c0ae52 WatchSource:0}: Error finding container 1f48c09d48294f9ae8c29dec3a36661f2574954ceb02ae2d5364824608c0ae52: Status 404 returned error can't find the container with id 1f48c09d48294f9ae8c29dec3a36661f2574954ceb02ae2d5364824608c0ae52 Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.316878 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.317055 4903 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.317119 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:45.317099119 +0000 UTC m=+1037.593070740 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "metrics-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.317210 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.317408 4903 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.317460 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:45.317444078 +0000 UTC m=+1037.593415589 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "webhook-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.544520 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56"] Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.552550 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln"] Jan 28 16:02:44 crc kubenswrapper[4903]: W0128 16:02:44.563455 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dee66d6_e59c_4cd4_b730_f31b7f3564b2.slice/crio-44fdc2b389f8b081b95b4900b57cf3d17cc9b21fb42041049664b1501a062959 WatchSource:0}: Error finding container 44fdc2b389f8b081b95b4900b57cf3d17cc9b21fb42041049664b1501a062959: Status 404 returned error can't find the container with id 44fdc2b389f8b081b95b4900b57cf3d17cc9b21fb42041049664b1501a062959 Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.564289 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9"] Jan 28 16:02:44 crc kubenswrapper[4903]: W0128 16:02:44.566937 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ec3d7c1_5943_4992_b2fd_4538131573f6.slice/crio-450cc8900f0d838ecee74832d75534f323c0bf534324a4b61599b4383cccad04 WatchSource:0}: Error finding container 450cc8900f0d838ecee74832d75534f323c0bf534324a4b61599b4383cccad04: Status 404 returned error can't find the container with id 450cc8900f0d838ecee74832d75534f323c0bf534324a4b61599b4383cccad04 Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.574067 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg"] Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.621811 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.622014 4903 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.622116 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert podName:812b8d8c-d506-46e8-a049-9d3b6d3c05e9 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:46.622090946 +0000 UTC m=+1038.898062507 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert") pod "infra-operator-controller-manager-694cf4f878-m2s8p" (UID: "812b8d8c-d506-46e8-a049-9d3b6d3c05e9") : secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.823561 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.823804 4903 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: E0128 16:02:44.823870 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert podName:f87bd6ee-e507-4dd8-b987-4d67aa7d5d85 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:46.823846234 +0000 UTC m=+1039.099817745 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" (UID: "f87bd6ee-e507-4dd8-b987-4d67aa7d5d85") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.884182 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" event={"ID":"7dee66d6-e59c-4cd4-b730-f31b7f3564b2","Type":"ContainerStarted","Data":"44fdc2b389f8b081b95b4900b57cf3d17cc9b21fb42041049664b1501a062959"} Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.888107 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" event={"ID":"2f636d64-bf91-43aa-ba24-b7a65cc968e4","Type":"ContainerStarted","Data":"48ad2ccb90d497a9cce984c85ce70b8e1830aaba84b909b27acd31ff7f639aa8"} Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.891564 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" event={"ID":"07b182ea-9e7b-4b3c-9bbc-677a6f61b9af","Type":"ContainerStarted","Data":"069be9beb8d0071c4d0b79e43b985e8db415ffa44f135c7b9a473df3b6eb4630"} Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.893720 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" event={"ID":"7be2e4ab-c0e6-4a70-9aba-d59133aa071f","Type":"ContainerStarted","Data":"1f48c09d48294f9ae8c29dec3a36661f2574954ceb02ae2d5364824608c0ae52"} Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.894741 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" event={"ID":"7ec3d7c1-5943-4992-b2fd-4538131573f6","Type":"ContainerStarted","Data":"450cc8900f0d838ecee74832d75534f323c0bf534324a4b61599b4383cccad04"} Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.896463 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" event={"ID":"13516b4d-d8ad-48a6-8794-305f46b7a2aa","Type":"ContainerStarted","Data":"398b8bbf0b06e7df89fb5a50739829c52d0de074692ef9f2f739a4312b6504b3"} Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.897470 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" event={"ID":"d987a2a1-ec4e-4332-bdd3-8d20e9e35efb","Type":"ContainerStarted","Data":"ec14cc9ea4a7f2712561139ec7f6ddb5a05d5fa7a5f08b8391ecddf3e9d8945f"} Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.898612 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx"] Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.911155 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td"] Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.922463 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp"] Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.949617 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64"] Jan 28 
16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.958465 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-874fz"] Jan 28 16:02:44 crc kubenswrapper[4903]: I0128 16:02:44.980411 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6"] Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:44.999964 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rw5dm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-c64gq_openstack-operators(a18a6add-e3ae-4914-93c7-0ac2ec35b53a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.000058 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mm9gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-6qzt6_openstack-operators(8b4e8a0c-bd20-4a9e-8b40-2f14d601325f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.000936 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pcsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-9g75t_openstack-operators(51460248-29df-4549-bb85-decda4cec14b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.001140 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmbcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-x74hn_openstack-operators(a527edb1-eb6a-4b49-b167-cde14f2dc01f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.001294 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtkts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-shc7w_openstack-operators(726d95fa-5b8a-4c1b-ae91-54f53b1141a9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 16:02:45 crc 
kubenswrapper[4903]: E0128 16:02:45.001455 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" podUID="8b4e8a0c-bd20-4a9e-8b40-2f14d601325f" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.001520 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" podUID="a18a6add-e3ae-4914-93c7-0ac2ec35b53a" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.007905 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" podUID="726d95fa-5b8a-4c1b-ae91-54f53b1141a9" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.007974 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" podUID="51460248-29df-4549-bb85-decda4cec14b" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.008245 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" podUID="a527edb1-eb6a-4b49-b167-cde14f2dc01f" Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.025781 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw"] Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.038562 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-78xkf"] Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.045443 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t"] Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.051413 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w"] Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.057657 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn"] Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.063659 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq"] Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.067108 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l"] Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.357281 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.357355 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.357568 4903 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.357639 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:47.357619356 +0000 UTC m=+1039.633590867 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "metrics-server-cert" not found Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.361415 4903 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.362382 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:47.362362576 +0000 UTC m=+1039.638334087 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "webhook-server-cert" not found Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.919346 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" event={"ID":"2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e","Type":"ContainerStarted","Data":"c0b59a26c4dde42ece2396361df7712d86e1b7da56121c5773ea3e83d32f4292"} Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.924152 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" event={"ID":"fc7c6b24-fa62-48ac-8eca-4b4055313f60","Type":"ContainerStarted","Data":"fe0b0963a9f4d8f4ee18e4bb0be692a54135248a1e2f0feaeb2cae9653df734c"} Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.925575 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" event={"ID":"a527edb1-eb6a-4b49-b167-cde14f2dc01f","Type":"ContainerStarted","Data":"03f48047cc8b1f7ba8d87073982e3c0b0ac8c82a5aa56a5efa82a48404397d3f"} Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.927650 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" 
podUID="a527edb1-eb6a-4b49-b167-cde14f2dc01f" Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.929846 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" event={"ID":"963c0e0a-fd5c-4156-ae48-02c4573137f1","Type":"ContainerStarted","Data":"887f19ff75c0d5d77b51ef0b12a07d174070d9a7fc2f103a2374254ddcccdb29"} Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.933512 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" event={"ID":"b8e4b217-041d-4097-9ede-0e6ea89353a4","Type":"ContainerStarted","Data":"cece15a51c3a244c8c0cef988974ec38fbb154b10ac65c1eb034db462f3aafe2"} Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.954773 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" event={"ID":"8b4e8a0c-bd20-4a9e-8b40-2f14d601325f","Type":"ContainerStarted","Data":"256a6d7e646b0e6582bf6a1d47a20f28ddce1d18c30c15ce7289f6ee9f01a46f"} Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.960045 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" podUID="8b4e8a0c-bd20-4a9e-8b40-2f14d601325f" Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.965094 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" event={"ID":"1e436aa5-21ea-4f24-8144-b8800f7286d3","Type":"ContainerStarted","Data":"0ba59d477ce2d1b0d86274d434d6c095d5221814fbba6950c457b3ec0cf0da94"} Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.980957 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" event={"ID":"f444752b-b039-47d4-b969-c8f8bcdcc4df","Type":"ContainerStarted","Data":"b7add4c2997455e068620f171ecabb4bc5c9f537f683a35502bc87c1a851c8ff"} Jan 28 16:02:45 crc kubenswrapper[4903]: I0128 16:02:45.987300 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" event={"ID":"a18a6add-e3ae-4914-93c7-0ac2ec35b53a","Type":"ContainerStarted","Data":"c0371f13ac08e28a4289ac98fff3988bf9e10ab8b1f3b5672805391d787a6f37"} Jan 28 16:02:45 crc kubenswrapper[4903]: E0128 16:02:45.989045 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" podUID="a18a6add-e3ae-4914-93c7-0ac2ec35b53a" Jan 28 16:02:46 crc kubenswrapper[4903]: I0128 16:02:46.008544 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" event={"ID":"65141c35-eda1-43fa-ae32-9f86b4bf5315","Type":"ContainerStarted","Data":"d5e792c790cff52d36a70f09cc392b6c335bc6ebe5230931798c9058cd3e9f6f"} Jan 28 16:02:46 crc kubenswrapper[4903]: I0128 16:02:46.011629 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" event={"ID":"51460248-29df-4549-bb85-decda4cec14b","Type":"ContainerStarted","Data":"7e8feb69c34a79bec740d8565a5702ba71e2493cb677fee22d1399a423f62f0f"} Jan 28 16:02:46 crc kubenswrapper[4903]: E0128 16:02:46.013762 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" podUID="51460248-29df-4549-bb85-decda4cec14b" Jan 28 16:02:46 crc kubenswrapper[4903]: I0128 16:02:46.016418 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" event={"ID":"3bd7a3f4-2963-4ac2-8f33-83a667789a33","Type":"ContainerStarted","Data":"03e82576576dab28b35dfdb40b0ddeb93b0b6da66a0d41dd70995e1d9f69033c"} Jan 28 16:02:46 crc kubenswrapper[4903]: I0128 16:02:46.018745 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" event={"ID":"726d95fa-5b8a-4c1b-ae91-54f53b1141a9","Type":"ContainerStarted","Data":"8468e5bf3dbe92264d4c9c4038927c7950f63d657b9e71f531bf242a7cdabb0d"} Jan 28 16:02:46 crc kubenswrapper[4903]: E0128 16:02:46.020519 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" podUID="726d95fa-5b8a-4c1b-ae91-54f53b1141a9" Jan 28 16:02:46 crc kubenswrapper[4903]: I0128 16:02:46.716399 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:46 crc kubenswrapper[4903]: E0128 16:02:46.716565 4903 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:46 crc kubenswrapper[4903]: E0128 16:02:46.716634 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert podName:812b8d8c-d506-46e8-a049-9d3b6d3c05e9 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:50.716616578 +0000 UTC m=+1042.992588089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert") pod "infra-operator-controller-manager-694cf4f878-m2s8p" (UID: "812b8d8c-d506-46e8-a049-9d3b6d3c05e9") : secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:46 crc kubenswrapper[4903]: I0128 16:02:46.919744 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:46 crc kubenswrapper[4903]: E0128 16:02:46.919932 4903 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:46 crc kubenswrapper[4903]: E0128 16:02:46.920007 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert podName:f87bd6ee-e507-4dd8-b987-4d67aa7d5d85 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:50.91998942 +0000 UTC m=+1043.195960931 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" (UID: "f87bd6ee-e507-4dd8-b987-4d67aa7d5d85") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.027776 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" podUID="a18a6add-e3ae-4914-93c7-0ac2ec35b53a" Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.027801 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" podUID="a527edb1-eb6a-4b49-b167-cde14f2dc01f" Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.027980 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" podUID="8b4e8a0c-bd20-4a9e-8b40-2f14d601325f" Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.028301 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" podUID="726d95fa-5b8a-4c1b-ae91-54f53b1141a9" Jan 
28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.029794 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" podUID="51460248-29df-4549-bb85-decda4cec14b" Jan 28 16:02:47 crc kubenswrapper[4903]: I0128 16:02:47.427442 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:47 crc kubenswrapper[4903]: I0128 16:02:47.427577 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.427675 4903 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.427722 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:51.427709302 +0000 UTC m=+1043.703680813 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "webhook-server-cert" not found Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.427755 4903 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 16:02:47 crc kubenswrapper[4903]: E0128 16:02:47.427830 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:51.427813585 +0000 UTC m=+1043.703785086 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "metrics-server-cert" not found Jan 28 16:02:50 crc kubenswrapper[4903]: I0128 16:02:50.777286 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:50 crc kubenswrapper[4903]: E0128 16:02:50.777770 4903 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:50 crc kubenswrapper[4903]: E0128 16:02:50.777817 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert podName:812b8d8c-d506-46e8-a049-9d3b6d3c05e9 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:58.777802894 +0000 UTC m=+1051.053774395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert") pod "infra-operator-controller-manager-694cf4f878-m2s8p" (UID: "812b8d8c-d506-46e8-a049-9d3b6d3c05e9") : secret "infra-operator-webhook-server-cert" not found Jan 28 16:02:50 crc kubenswrapper[4903]: I0128 16:02:50.980047 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:50 crc kubenswrapper[4903]: E0128 16:02:50.980229 4903 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:50 crc kubenswrapper[4903]: E0128 16:02:50.980275 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert podName:f87bd6ee-e507-4dd8-b987-4d67aa7d5d85 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:58.98026212 +0000 UTC m=+1051.256233631 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert") pod "openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" (UID: "f87bd6ee-e507-4dd8-b987-4d67aa7d5d85") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 16:02:51 crc kubenswrapper[4903]: I0128 16:02:51.494914 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:51 crc kubenswrapper[4903]: I0128 16:02:51.495203 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:51 crc kubenswrapper[4903]: E0128 16:02:51.495039 4903 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 16:02:51 crc kubenswrapper[4903]: E0128 16:02:51.495277 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:59.495259401 +0000 UTC m=+1051.771230912 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "webhook-server-cert" not found Jan 28 16:02:51 crc kubenswrapper[4903]: E0128 16:02:51.495337 4903 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 16:02:51 crc kubenswrapper[4903]: E0128 16:02:51.495369 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:02:59.495359793 +0000 UTC m=+1051.771331304 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "metrics-server-cert" not found Jan 28 16:02:56 crc kubenswrapper[4903]: E0128 16:02:56.414707 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 28 16:02:56 crc kubenswrapper[4903]: E0128 16:02:56.415247 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jbmfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-xpxb9_openstack-operators(7dee66d6-e59c-4cd4-b730-f31b7f3564b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:02:56 crc kubenswrapper[4903]: E0128 16:02:56.416765 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" 
podUID="7dee66d6-e59c-4cd4-b730-f31b7f3564b2" Jan 28 16:02:57 crc kubenswrapper[4903]: E0128 16:02:57.104738 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 28 16:02:57 crc kubenswrapper[4903]: E0128 16:02:57.104929 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jc6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-ktd5l_openstack-operators(65141c35-eda1-43fa-ae32-9f86b4bf5315): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:02:57 crc kubenswrapper[4903]: E0128 16:02:57.106286 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" podUID="7dee66d6-e59c-4cd4-b730-f31b7f3564b2" Jan 28 16:02:57 crc kubenswrapper[4903]: E0128 16:02:57.107299 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" podUID="65141c35-eda1-43fa-ae32-9f86b4bf5315" Jan 28 16:02:57 crc kubenswrapper[4903]: E0128 16:02:57.662174 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 28 16:02:57 crc kubenswrapper[4903]: E0128 16:02:57.662328 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h69m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-78xkf_openstack-operators(b8e4b217-041d-4097-9ede-0e6ea89353a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:02:57 crc kubenswrapper[4903]: E0128 16:02:57.663568 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" podUID="b8e4b217-041d-4097-9ede-0e6ea89353a4" Jan 
28 16:02:58 crc kubenswrapper[4903]: E0128 16:02:58.113466 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" podUID="b8e4b217-041d-4097-9ede-0e6ea89353a4" Jan 28 16:02:58 crc kubenswrapper[4903]: E0128 16:02:58.113927 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" podUID="65141c35-eda1-43fa-ae32-9f86b4bf5315" Jan 28 16:02:58 crc kubenswrapper[4903]: I0128 16:02:58.811223 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:58 crc kubenswrapper[4903]: I0128 16:02:58.818205 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/812b8d8c-d506-46e8-a049-9d3b6d3c05e9-cert\") pod \"infra-operator-controller-manager-694cf4f878-m2s8p\" (UID: \"812b8d8c-d506-46e8-a049-9d3b6d3c05e9\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:58 crc kubenswrapper[4903]: I0128 16:02:58.839114 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:02:59 crc kubenswrapper[4903]: I0128 16:02:59.013722 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:59 crc kubenswrapper[4903]: I0128 16:02:59.018986 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f87bd6ee-e507-4dd8-b987-4d67aa7d5d85-cert\") pod \"openstack-baremetal-operator-controller-manager-5b5d4999dc8727j\" (UID: \"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:59 crc kubenswrapper[4903]: I0128 16:02:59.199720 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:02:59 crc kubenswrapper[4903]: I0128 16:02:59.521474 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:59 crc kubenswrapper[4903]: I0128 16:02:59.521612 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:02:59 crc kubenswrapper[4903]: E0128 16:02:59.521798 4903 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 16:02:59 crc kubenswrapper[4903]: E0128 16:02:59.521881 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs podName:55ba9bac-caa2-495d-b933-661303f3c265 nodeName:}" failed. No retries permitted until 2026-01-28 16:03:15.521858926 +0000 UTC m=+1067.797830457 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs") pod "openstack-operator-controller-manager-9f67d7-dm5mg" (UID: "55ba9bac-caa2-495d-b933-661303f3c265") : secret "webhook-server-cert" not found Jan 28 16:02:59 crc kubenswrapper[4903]: I0128 16:02:59.526662 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-metrics-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:03:03 crc kubenswrapper[4903]: E0128 16:03:03.199104 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 28 16:03:03 crc kubenswrapper[4903]: E0128 16:03:03.199575 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l28xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-bf2vp_openstack-operators(fc7c6b24-fa62-48ac-8eca-4b4055313f60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:03:03 crc kubenswrapper[4903]: E0128 16:03:03.200755 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" podUID="fc7c6b24-fa62-48ac-8eca-4b4055313f60" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.120218 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.120665 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hm55x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-lxn64_openstack-operators(f444752b-b039-47d4-b969-c8f8bcdcc4df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.121766 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" podUID="f444752b-b039-47d4-b969-c8f8bcdcc4df" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.157953 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" podUID="fc7c6b24-fa62-48ac-8eca-4b4055313f60" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.159428 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" podUID="f444752b-b039-47d4-b969-c8f8bcdcc4df" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.685802 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.685978 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sz4jp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rf2vw_openstack-operators(963c0e0a-fd5c-4156-ae48-02c4573137f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:03:04 crc kubenswrapper[4903]: E0128 16:03:04.687629 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" podUID="963c0e0a-fd5c-4156-ae48-02c4573137f1" Jan 28 16:03:05 crc kubenswrapper[4903]: E0128 16:03:05.170640 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" podUID="963c0e0a-fd5c-4156-ae48-02c4573137f1" Jan 28 16:03:05 crc kubenswrapper[4903]: E0128 16:03:05.337040 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 28 16:03:05 crc kubenswrapper[4903]: E0128 16:03:05.337195 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8nhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-874fz_openstack-operators(3bd7a3f4-2963-4ac2-8f33-83a667789a33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:03:05 crc kubenswrapper[4903]: E0128 16:03:05.338490 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" podUID="3bd7a3f4-2963-4ac2-8f33-83a667789a33" Jan 28 16:03:05 crc kubenswrapper[4903]: E0128 16:03:05.832792 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 28 16:03:05 crc kubenswrapper[4903]: E0128 16:03:05.833147 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dx87g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-2rvjx_openstack-operators(1e436aa5-21ea-4f24-8144-b8800f7286d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:03:05 crc kubenswrapper[4903]: E0128 16:03:05.834952 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" podUID="1e436aa5-21ea-4f24-8144-b8800f7286d3" Jan 28 16:03:06 crc kubenswrapper[4903]: E0128 16:03:06.177134 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" podUID="3bd7a3f4-2963-4ac2-8f33-83a667789a33" Jan 28 16:03:06 crc kubenswrapper[4903]: E0128 16:03:06.177383 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" 
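By this point most of the operator manager containers have cycled from ErrImagePull ("context canceled") into ImagePullBackOff. A throwaway helper like the one below can tally the affected pods from a saved copy of this journal; it is an illustrative sketch, not part of kubelet or any OpenStack operator, and its regular expression simply matches the pod="..." attribute on the back-off records shown above.

```go
// Illustrative helper (not kubelet code): read journal lines on stdin and
// count ImagePullBackOff records per pod, based on the record format above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`ImagePullBackOff.* pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these journal lines are long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%d\t%s\n", n, pod)
	}
}
```

Piping journalctl output for the kubelet unit into it (the exact unit name on this host is not shown here) would list, for example, the telemetry-, swift-, test- and placement-operator pods seen backing off above.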
podUID="1e436aa5-21ea-4f24-8144-b8800f7286d3" Jan 28 16:03:09 crc kubenswrapper[4903]: I0128 16:03:09.209878 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" event={"ID":"7ec3d7c1-5943-4992-b2fd-4538131573f6","Type":"ContainerStarted","Data":"fd4dc12d9cc55d1f1802589b50dcbbe26e8201adacdfc93af62fffc61a4f0738"} Jan 28 16:03:09 crc kubenswrapper[4903]: I0128 16:03:09.210816 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" event={"ID":"2f636d64-bf91-43aa-ba24-b7a65cc968e4","Type":"ContainerStarted","Data":"efd6ce425852cfba41074a3ff80d6dfe36d23b9466b68bd0ff4430f384a58881"} Jan 28 16:03:09 crc kubenswrapper[4903]: I0128 16:03:09.307304 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p"] Jan 28 16:03:09 crc kubenswrapper[4903]: W0128 16:03:09.313029 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812b8d8c_d506_46e8_a049_9d3b6d3c05e9.slice/crio-ad08340d278b646882782377f2c55b58505606508a27141c5da44b905b8255bf WatchSource:0}: Error finding container ad08340d278b646882782377f2c55b58505606508a27141c5da44b905b8255bf: Status 404 returned error can't find the container with id ad08340d278b646882782377f2c55b58505606508a27141c5da44b905b8255bf Jan 28 16:03:09 crc kubenswrapper[4903]: I0128 16:03:09.335211 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j"] Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.223020 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" event={"ID":"07b182ea-9e7b-4b3c-9bbc-677a6f61b9af","Type":"ContainerStarted","Data":"9c6f55219993081afdbcaed864b773f68f893969369f4f66e145c4a5835a6f18"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.224118 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.230764 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" event={"ID":"51460248-29df-4549-bb85-decda4cec14b","Type":"ContainerStarted","Data":"4c95e24240b588b75750979da8b5764670906147c2a08fe6e84d35057c5ae260"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.231412 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.244695 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" event={"ID":"13516b4d-d8ad-48a6-8794-305f46b7a2aa","Type":"ContainerStarted","Data":"1989817a58059d31a22dc41cff06181bdd265ce4b4961904afca00d893a0ef1b"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.245278 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.251970 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" podStartSLOduration=6.624502856 podStartE2EDuration="28.251959739s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.202036418 +0000 UTC m=+1036.478007929" lastFinishedPulling="2026-01-28 16:03:05.829493301 +0000 UTC m=+1058.105464812" observedRunningTime="2026-01-28 16:03:10.247514997 +0000 UTC m=+1062.523486508" watchObservedRunningTime="2026-01-28 16:03:10.251959739 +0000 UTC m=+1062.527931250" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.258352 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" event={"ID":"726d95fa-5b8a-4c1b-ae91-54f53b1141a9","Type":"ContainerStarted","Data":"9351ff1a1e632a42567d14516df6d63ebac8b9c42a9317777ee5a8703ead3228"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.258733 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.280139 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" podStartSLOduration=6.872717671 podStartE2EDuration="28.280119427s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:43.912332218 +0000 UTC m=+1036.188303729" lastFinishedPulling="2026-01-28 16:03:05.319733974 +0000 UTC m=+1057.595705485" observedRunningTime="2026-01-28 16:03:10.27581659 +0000 UTC m=+1062.551788101" watchObservedRunningTime="2026-01-28 16:03:10.280119427 +0000 UTC m=+1062.556090928" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.285073 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" event={"ID":"2d8256bb-c9d9-46f1-9fb3-c30fcbfb078e","Type":"ContainerStarted","Data":"133d52502f0b602d7ed2daaca8d26dc689febc345bdd22801cae3d305b5c7c13"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.285687 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.287515 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" event={"ID":"d987a2a1-ec4e-4332-bdd3-8d20e9e35efb","Type":"ContainerStarted","Data":"57daecd5db651093ab2bd5d44b69cd2b69ac8cc93d0902bdab8af0e787640e48"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.287844 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.288449 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" event={"ID":"812b8d8c-d506-46e8-a049-9d3b6d3c05e9","Type":"ContainerStarted","Data":"ad08340d278b646882782377f2c55b58505606508a27141c5da44b905b8255bf"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.296172 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" 
event={"ID":"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85","Type":"ContainerStarted","Data":"9d5d580825713762caea3fa6e0f212533d81ec57398b9a0c01c0d29e504b3b50"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.297096 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" event={"ID":"7be2e4ab-c0e6-4a70-9aba-d59133aa071f","Type":"ContainerStarted","Data":"3a6e96dc25be899b1732c7cb6de7d0b710c227db3da3cda47d2d46c9feec819e"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.297281 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.304709 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" podStartSLOduration=4.480852333 podStartE2EDuration="28.304694319s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:45.000787885 +0000 UTC m=+1037.276759386" lastFinishedPulling="2026-01-28 16:03:08.824629851 +0000 UTC m=+1061.100601372" observedRunningTime="2026-01-28 16:03:10.30218722 +0000 UTC m=+1062.578158731" watchObservedRunningTime="2026-01-28 16:03:10.304694319 +0000 UTC m=+1062.580665830" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.315810 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" event={"ID":"8b4e8a0c-bd20-4a9e-8b40-2f14d601325f","Type":"ContainerStarted","Data":"c44a9da20c9078fe2f41a5c1e5a7f867b1d05fdb0eff5345aef356a5ab0e099b"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.316454 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.334156 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" event={"ID":"a527edb1-eb6a-4b49-b167-cde14f2dc01f","Type":"ContainerStarted","Data":"9973c6fb477fa7e031e691921342f8f422535fee0cca9ecf01cd08024647f410"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.334733 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.336240 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" podStartSLOduration=3.523568718 podStartE2EDuration="27.336219629s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="2026-01-28 16:02:45.001214356 +0000 UTC m=+1037.277185867" lastFinishedPulling="2026-01-28 16:03:08.813865267 +0000 UTC m=+1061.089836778" observedRunningTime="2026-01-28 16:03:10.331903091 +0000 UTC m=+1062.607874602" watchObservedRunningTime="2026-01-28 16:03:10.336219629 +0000 UTC m=+1062.612191140" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.354495 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" event={"ID":"a18a6add-e3ae-4914-93c7-0ac2ec35b53a","Type":"ContainerStarted","Data":"4461828b1068535d35957f78dade0c319427541b9b192fe778747292da039bb6"} Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 
16:03:10.354561 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.354881 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.355190 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.355601 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" podStartSLOduration=6.734397716 podStartE2EDuration="28.355587728s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.208632008 +0000 UTC m=+1036.484603519" lastFinishedPulling="2026-01-28 16:03:05.82982202 +0000 UTC m=+1058.105793531" observedRunningTime="2026-01-28 16:03:10.352826502 +0000 UTC m=+1062.628798013" watchObservedRunningTime="2026-01-28 16:03:10.355587728 +0000 UTC m=+1062.631559239" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.386813 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" podStartSLOduration=7.515649625 podStartE2EDuration="28.380542419s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.964762981 +0000 UTC m=+1037.240734492" lastFinishedPulling="2026-01-28 16:03:05.829655735 +0000 UTC m=+1058.105627286" observedRunningTime="2026-01-28 16:03:10.372346005 +0000 UTC m=+1062.648317516" watchObservedRunningTime="2026-01-28 16:03:10.380542419 +0000 UTC m=+1062.656513930" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.421804 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" podStartSLOduration=7.657240381 podStartE2EDuration="28.421790706s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.554196642 +0000 UTC m=+1036.830168153" lastFinishedPulling="2026-01-28 16:03:05.318746977 +0000 UTC m=+1057.594718478" observedRunningTime="2026-01-28 16:03:10.404471142 +0000 UTC m=+1062.680442653" watchObservedRunningTime="2026-01-28 16:03:10.421790706 +0000 UTC m=+1062.697762217" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.424932 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" podStartSLOduration=3.568539296 podStartE2EDuration="27.424926071s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.999768937 +0000 UTC m=+1037.275740438" lastFinishedPulling="2026-01-28 16:03:08.856155702 +0000 UTC m=+1061.132127213" observedRunningTime="2026-01-28 16:03:10.421873828 +0000 UTC m=+1062.697845339" watchObservedRunningTime="2026-01-28 16:03:10.424926071 +0000 UTC m=+1062.700897582" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.464461 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" podStartSLOduration=7.208271973 podStartE2EDuration="28.4644435s" podCreationTimestamp="2026-01-28 16:02:42 +0000 
UTC" firstStartedPulling="2026-01-28 16:02:44.573699364 +0000 UTC m=+1036.849670875" lastFinishedPulling="2026-01-28 16:03:05.829870891 +0000 UTC m=+1058.105842402" observedRunningTime="2026-01-28 16:03:10.460253615 +0000 UTC m=+1062.736225126" watchObservedRunningTime="2026-01-28 16:03:10.4644435 +0000 UTC m=+1062.740415011" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.479265 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" podStartSLOduration=7.196909754 podStartE2EDuration="28.479250135s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.54715524 +0000 UTC m=+1036.823126751" lastFinishedPulling="2026-01-28 16:03:05.829495621 +0000 UTC m=+1058.105467132" observedRunningTime="2026-01-28 16:03:10.475889202 +0000 UTC m=+1062.751860713" watchObservedRunningTime="2026-01-28 16:03:10.479250135 +0000 UTC m=+1062.755221646" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.497300 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" podStartSLOduration=3.698439903 podStartE2EDuration="27.497283636s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.999759887 +0000 UTC m=+1037.275731398" lastFinishedPulling="2026-01-28 16:03:08.79860362 +0000 UTC m=+1061.074575131" observedRunningTime="2026-01-28 16:03:10.494411508 +0000 UTC m=+1062.770383019" watchObservedRunningTime="2026-01-28 16:03:10.497283636 +0000 UTC m=+1062.773255147" Jan 28 16:03:10 crc kubenswrapper[4903]: I0128 16:03:10.522280 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" podStartSLOduration=3.661194055 podStartE2EDuration="27.522264388s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="2026-01-28 16:02:45.001034432 +0000 UTC m=+1037.277005943" lastFinishedPulling="2026-01-28 16:03:08.862104755 +0000 UTC m=+1061.138076276" observedRunningTime="2026-01-28 16:03:10.519189694 +0000 UTC m=+1062.795161205" watchObservedRunningTime="2026-01-28 16:03:10.522264388 +0000 UTC m=+1062.798235899" Jan 28 16:03:11 crc kubenswrapper[4903]: I0128 16:03:11.360552 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" event={"ID":"7dee66d6-e59c-4cd4-b730-f31b7f3564b2","Type":"ContainerStarted","Data":"ded47029823a51b46e51913ac4f164db075a1c152059aa10812a3b7d9f381ef9"} Jan 28 16:03:11 crc kubenswrapper[4903]: I0128 16:03:11.361496 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" Jan 28 16:03:11 crc kubenswrapper[4903]: I0128 16:03:11.364111 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" event={"ID":"65141c35-eda1-43fa-ae32-9f86b4bf5315","Type":"ContainerStarted","Data":"cb55a9c7c53304753ddc571157c87514f064c76d7485367f456739d85b80bd78"} Jan 28 16:03:11 crc kubenswrapper[4903]: I0128 16:03:11.375600 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" podStartSLOduration=2.954010777 podStartE2EDuration="29.375583715s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" 
firstStartedPulling="2026-01-28 16:02:44.567368602 +0000 UTC m=+1036.843340113" lastFinishedPulling="2026-01-28 16:03:10.98894154 +0000 UTC m=+1063.264913051" observedRunningTime="2026-01-28 16:03:11.374326901 +0000 UTC m=+1063.650298412" watchObservedRunningTime="2026-01-28 16:03:11.375583715 +0000 UTC m=+1063.651555226" Jan 28 16:03:11 crc kubenswrapper[4903]: I0128 16:03:11.394211 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" podStartSLOduration=2.532835171 podStartE2EDuration="28.394190483s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="2026-01-28 16:02:45.008378472 +0000 UTC m=+1037.284349983" lastFinishedPulling="2026-01-28 16:03:10.869733784 +0000 UTC m=+1063.145705295" observedRunningTime="2026-01-28 16:03:11.3919145 +0000 UTC m=+1063.667886011" watchObservedRunningTime="2026-01-28 16:03:11.394190483 +0000 UTC m=+1063.670161994" Jan 28 16:03:13 crc kubenswrapper[4903]: I0128 16:03:13.375965 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" event={"ID":"812b8d8c-d506-46e8-a049-9d3b6d3c05e9","Type":"ContainerStarted","Data":"5ae6abc863d3b7c6c65ca885d4b148523d8333f5166d1c683d189aaf61b56fbd"} Jan 28 16:03:13 crc kubenswrapper[4903]: I0128 16:03:13.376580 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:03:13 crc kubenswrapper[4903]: I0128 16:03:13.377781 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" event={"ID":"f87bd6ee-e507-4dd8-b987-4d67aa7d5d85","Type":"ContainerStarted","Data":"622eba4c3947768d4e87f0da4213544418cc38659d084776efacc0634521d3b6"} Jan 28 16:03:13 crc kubenswrapper[4903]: I0128 16:03:13.377990 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:03:13 crc kubenswrapper[4903]: I0128 16:03:13.395470 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" podStartSLOduration=28.025773062 podStartE2EDuration="31.395449259s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:03:09.339362023 +0000 UTC m=+1061.615333534" lastFinishedPulling="2026-01-28 16:03:12.70903823 +0000 UTC m=+1064.985009731" observedRunningTime="2026-01-28 16:03:13.394584916 +0000 UTC m=+1065.670556437" watchObservedRunningTime="2026-01-28 16:03:13.395449259 +0000 UTC m=+1065.671420770" Jan 28 16:03:13 crc kubenswrapper[4903]: I0128 16:03:13.424906 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" podStartSLOduration=28.037975727 podStartE2EDuration="31.424883273s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:03:09.339426625 +0000 UTC m=+1061.615398136" lastFinishedPulling="2026-01-28 16:03:12.726334141 +0000 UTC m=+1065.002305682" observedRunningTime="2026-01-28 16:03:13.422599991 +0000 UTC m=+1065.698571502" watchObservedRunningTime="2026-01-28 16:03:13.424883273 +0000 UTC m=+1065.700854784" Jan 28 16:03:13 crc kubenswrapper[4903]: I0128 16:03:13.658306 4903 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" Jan 28 16:03:14 crc kubenswrapper[4903]: I0128 16:03:14.388227 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" event={"ID":"b8e4b217-041d-4097-9ede-0e6ea89353a4","Type":"ContainerStarted","Data":"8394c8d443b356293344b7dbd10e48142e782ce736f98bb7169e6b31b380582e"} Jan 28 16:03:14 crc kubenswrapper[4903]: I0128 16:03:14.409936 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" podStartSLOduration=2.556464535 podStartE2EDuration="31.409889065s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.983488892 +0000 UTC m=+1037.259460403" lastFinishedPulling="2026-01-28 16:03:13.836913422 +0000 UTC m=+1066.112884933" observedRunningTime="2026-01-28 16:03:14.405495115 +0000 UTC m=+1066.681466626" watchObservedRunningTime="2026-01-28 16:03:14.409889065 +0000 UTC m=+1066.685860596" Jan 28 16:03:15 crc kubenswrapper[4903]: I0128 16:03:15.535792 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:03:15 crc kubenswrapper[4903]: I0128 16:03:15.542829 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/55ba9bac-caa2-495d-b933-661303f3c265-webhook-certs\") pod \"openstack-operator-controller-manager-9f67d7-dm5mg\" (UID: \"55ba9bac-caa2-495d-b933-661303f3c265\") " pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:03:15 crc kubenswrapper[4903]: I0128 16:03:15.680359 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:03:16 crc kubenswrapper[4903]: I0128 16:03:16.182479 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg"] Jan 28 16:03:16 crc kubenswrapper[4903]: I0128 16:03:16.402484 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" event={"ID":"55ba9bac-caa2-495d-b933-661303f3c265","Type":"ContainerStarted","Data":"042dc52a71c935123fca1c8cd6b7d9d9bff95df8720f1ca1b222e9e4e0f49fce"} Jan 28 16:03:16 crc kubenswrapper[4903]: I0128 16:03:16.402863 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:03:16 crc kubenswrapper[4903]: I0128 16:03:16.402878 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" event={"ID":"55ba9bac-caa2-495d-b933-661303f3c265","Type":"ContainerStarted","Data":"9720589046a0c29ff62ea7b68cfc90b826c692ec769f722d90ad1a2fc1649829"} Jan 28 16:03:16 crc kubenswrapper[4903]: I0128 16:03:16.415159 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:03:16 crc kubenswrapper[4903]: I0128 16:03:16.460440 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" podStartSLOduration=33.460421876 podStartE2EDuration="33.460421876s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:03:16.436145684 +0000 UTC m=+1068.712117195" watchObservedRunningTime="2026-01-28 16:03:16.460421876 +0000 UTC m=+1068.736393387" Jan 28 16:03:17 crc kubenswrapper[4903]: I0128 16:03:17.410639 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" event={"ID":"fc7c6b24-fa62-48ac-8eca-4b4055313f60","Type":"ContainerStarted","Data":"2d0d89a65bc5942b3e26503feed53de9926de3306bc49a79655570ca609795b0"} Jan 28 16:03:17 crc kubenswrapper[4903]: I0128 16:03:17.411681 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" Jan 28 16:03:17 crc kubenswrapper[4903]: I0128 16:03:17.448663 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" podStartSLOduration=3.531767312 podStartE2EDuration="35.448643276s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.968953726 +0000 UTC m=+1037.244925237" lastFinishedPulling="2026-01-28 16:03:16.8858297 +0000 UTC m=+1069.161801201" observedRunningTime="2026-01-28 16:03:17.443804764 +0000 UTC m=+1069.719776285" watchObservedRunningTime="2026-01-28 16:03:17.448643276 +0000 UTC m=+1069.724614787" Jan 28 16:03:18 crc kubenswrapper[4903]: I0128 16:03:18.425988 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" event={"ID":"3bd7a3f4-2963-4ac2-8f33-83a667789a33","Type":"ContainerStarted","Data":"ef1c7d8be28c6cebc953ea85d1a8b8be82600defb265895804bf229e2247fc9d"} Jan 28 16:03:18 crc 
kubenswrapper[4903]: I0128 16:03:18.426261 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" Jan 28 16:03:18 crc kubenswrapper[4903]: I0128 16:03:18.845601 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-m2s8p" Jan 28 16:03:18 crc kubenswrapper[4903]: I0128 16:03:18.878755 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" podStartSLOduration=3.9282275650000003 podStartE2EDuration="36.878721169s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.983922734 +0000 UTC m=+1037.259894245" lastFinishedPulling="2026-01-28 16:03:17.934416298 +0000 UTC m=+1070.210387849" observedRunningTime="2026-01-28 16:03:18.44692734 +0000 UTC m=+1070.722898851" watchObservedRunningTime="2026-01-28 16:03:18.878721169 +0000 UTC m=+1071.154692720" Jan 28 16:03:19 crc kubenswrapper[4903]: I0128 16:03:19.207482 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b5d4999dc8727j" Jan 28 16:03:20 crc kubenswrapper[4903]: I0128 16:03:20.438301 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" event={"ID":"1e436aa5-21ea-4f24-8144-b8800f7286d3","Type":"ContainerStarted","Data":"eec621b9e68e204c0b45c2d32d0056d707175f03d4727c38efc8985c8ad272de"} Jan 28 16:03:20 crc kubenswrapper[4903]: I0128 16:03:20.438994 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" Jan 28 16:03:20 crc kubenswrapper[4903]: I0128 16:03:20.442340 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" event={"ID":"f444752b-b039-47d4-b969-c8f8bcdcc4df","Type":"ContainerStarted","Data":"d362440aa116cb0e45408b533b7e23a7dd8f727c6e2ebe42d4f6dde1c4cdf9af"} Jan 28 16:03:20 crc kubenswrapper[4903]: I0128 16:03:20.442608 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" Jan 28 16:03:20 crc kubenswrapper[4903]: I0128 16:03:20.458798 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" podStartSLOduration=3.638044461 podStartE2EDuration="38.458779786s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.983495262 +0000 UTC m=+1037.259466773" lastFinishedPulling="2026-01-28 16:03:19.804230567 +0000 UTC m=+1072.080202098" observedRunningTime="2026-01-28 16:03:20.454002926 +0000 UTC m=+1072.729974437" watchObservedRunningTime="2026-01-28 16:03:20.458779786 +0000 UTC m=+1072.734751317" Jan 28 16:03:20 crc kubenswrapper[4903]: I0128 16:03:20.472689 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" podStartSLOduration=3.551418288 podStartE2EDuration="38.472669935s" podCreationTimestamp="2026-01-28 16:02:42 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.987505112 +0000 UTC m=+1037.263476623" lastFinishedPulling="2026-01-28 16:03:19.908756759 +0000 UTC 
m=+1072.184728270" observedRunningTime="2026-01-28 16:03:20.469757326 +0000 UTC m=+1072.745728837" watchObservedRunningTime="2026-01-28 16:03:20.472669935 +0000 UTC m=+1072.748641446" Jan 28 16:03:21 crc kubenswrapper[4903]: I0128 16:03:21.450623 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" event={"ID":"963c0e0a-fd5c-4156-ae48-02c4573137f1","Type":"ContainerStarted","Data":"b2736136ec764ebefebbbf1cd38a0d1c7ab93de81c8932dc60663f5dd5d2a59b"} Jan 28 16:03:21 crc kubenswrapper[4903]: I0128 16:03:21.464704 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rf2vw" podStartSLOduration=2.555701303 podStartE2EDuration="38.464663597s" podCreationTimestamp="2026-01-28 16:02:43 +0000 UTC" firstStartedPulling="2026-01-28 16:02:44.990651957 +0000 UTC m=+1037.266623468" lastFinishedPulling="2026-01-28 16:03:20.899614251 +0000 UTC m=+1073.175585762" observedRunningTime="2026-01-28 16:03:21.46290841 +0000 UTC m=+1073.738879931" watchObservedRunningTime="2026-01-28 16:03:21.464663597 +0000 UTC m=+1073.740635108" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.020276 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-m8d75" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.035216 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-99ppz" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.052185 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-7vnbg" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.084795 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-2ks9s" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.106182 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-62q56" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.163648 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wnkln" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.273552 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-xpxb9" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.446875 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-c64gq" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.450175 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bf2vp" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.472684 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-pp8td" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.565900 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-7bdb645866-874fz" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.567993 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-9g75t" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.659733 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktd5l" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.687440 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-shc7w" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.700873 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-x74hn" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.779882 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-6qzt6" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.804665 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" Jan 28 16:03:23 crc kubenswrapper[4903]: I0128 16:03:23.811349 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-78xkf" Jan 28 16:03:25 crc kubenswrapper[4903]: I0128 16:03:25.689422 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-9f67d7-dm5mg" Jan 28 16:03:26 crc kubenswrapper[4903]: I0128 16:03:26.613559 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:03:26 crc kubenswrapper[4903]: I0128 16:03:26.613617 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:03:33 crc kubenswrapper[4903]: I0128 16:03:33.374704 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-2rvjx" Jan 28 16:03:33 crc kubenswrapper[4903]: I0128 16:03:33.498418 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-lxn64" Jan 28 16:03:47 crc kubenswrapper[4903]: I0128 16:03:47.932279 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-h7xmr"] Jan 28 16:03:47 crc kubenswrapper[4903]: I0128 16:03:47.934186 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:47 crc kubenswrapper[4903]: I0128 16:03:47.937071 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 16:03:47 crc kubenswrapper[4903]: I0128 16:03:47.937359 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-96tw7" Jan 28 16:03:47 crc kubenswrapper[4903]: I0128 16:03:47.937570 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 16:03:47 crc kubenswrapper[4903]: I0128 16:03:47.937736 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 16:03:47 crc kubenswrapper[4903]: I0128 16:03:47.947146 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-h7xmr"] Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.006570 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8695d56c-3f7a-4627-9353-21d5604c3541-config\") pod \"dnsmasq-dns-84bb9d8bd9-h7xmr\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.006647 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnmn\" (UniqueName: \"kubernetes.io/projected/8695d56c-3f7a-4627-9353-21d5604c3541-kube-api-access-wrnmn\") pod \"dnsmasq-dns-84bb9d8bd9-h7xmr\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.024224 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wjp4x"] Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.025490 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.027014 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.031583 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wjp4x"] Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.108131 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8695d56c-3f7a-4627-9353-21d5604c3541-config\") pod \"dnsmasq-dns-84bb9d8bd9-h7xmr\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.108214 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-dns-svc\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.108255 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvf9m\" (UniqueName: \"kubernetes.io/projected/6fd6c635-e949-46f5-b6b4-c28832f7d69b-kube-api-access-hvf9m\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.108303 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrnmn\" (UniqueName: \"kubernetes.io/projected/8695d56c-3f7a-4627-9353-21d5604c3541-kube-api-access-wrnmn\") pod \"dnsmasq-dns-84bb9d8bd9-h7xmr\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.108372 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-config\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.109748 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8695d56c-3f7a-4627-9353-21d5604c3541-config\") pod \"dnsmasq-dns-84bb9d8bd9-h7xmr\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.129097 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrnmn\" (UniqueName: \"kubernetes.io/projected/8695d56c-3f7a-4627-9353-21d5604c3541-kube-api-access-wrnmn\") pod \"dnsmasq-dns-84bb9d8bd9-h7xmr\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.209294 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-config\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 
16:03:48.209729 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-dns-svc\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.209868 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvf9m\" (UniqueName: \"kubernetes.io/projected/6fd6c635-e949-46f5-b6b4-c28832f7d69b-kube-api-access-hvf9m\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.210516 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-config\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.210793 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-dns-svc\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.234886 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvf9m\" (UniqueName: \"kubernetes.io/projected/6fd6c635-e949-46f5-b6b4-c28832f7d69b-kube-api-access-hvf9m\") pod \"dnsmasq-dns-5f854695bc-wjp4x\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.256380 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.338063 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.773404 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-h7xmr"] Jan 28 16:03:48 crc kubenswrapper[4903]: I0128 16:03:48.786036 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wjp4x"] Jan 28 16:03:48 crc kubenswrapper[4903]: W0128 16:03:48.797763 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd6c635_e949_46f5_b6b4_c28832f7d69b.slice/crio-ae03f811ba347501a835adb3bd58291a2c94c28543dc61d8c918a1204318fa43 WatchSource:0}: Error finding container ae03f811ba347501a835adb3bd58291a2c94c28543dc61d8c918a1204318fa43: Status 404 returned error can't find the container with id ae03f811ba347501a835adb3bd58291a2c94c28543dc61d8c918a1204318fa43 Jan 28 16:03:49 crc kubenswrapper[4903]: I0128 16:03:49.660848 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" event={"ID":"8695d56c-3f7a-4627-9353-21d5604c3541","Type":"ContainerStarted","Data":"079788b95c8508270b15e7338bd39fac9cfe5fde8a5b2206ab150b9a2308fec7"} Jan 28 16:03:49 crc kubenswrapper[4903]: I0128 16:03:49.661937 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" event={"ID":"6fd6c635-e949-46f5-b6b4-c28832f7d69b","Type":"ContainerStarted","Data":"ae03f811ba347501a835adb3bd58291a2c94c28543dc61d8c918a1204318fa43"} Jan 28 16:03:50 crc kubenswrapper[4903]: I0128 16:03:50.784483 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wjp4x"] Jan 28 16:03:50 crc kubenswrapper[4903]: I0128 16:03:50.810380 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-jd9zd"] Jan 28 16:03:50 crc kubenswrapper[4903]: I0128 16:03:50.811574 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:50 crc kubenswrapper[4903]: I0128 16:03:50.827605 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-jd9zd"] Jan 28 16:03:50 crc kubenswrapper[4903]: I0128 16:03:50.949864 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:50 crc kubenswrapper[4903]: I0128 16:03:50.949909 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tht9\" (UniqueName: \"kubernetes.io/projected/69c99f72-85ce-4565-be21-569dee03cfdb-kube-api-access-7tht9\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:50 crc kubenswrapper[4903]: I0128 16:03:50.950100 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-config\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.051466 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.051512 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tht9\" (UniqueName: \"kubernetes.io/projected/69c99f72-85ce-4565-be21-569dee03cfdb-kube-api-access-7tht9\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.051600 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-config\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.052462 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.062312 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-config\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.083633 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tht9\" (UniqueName: 
\"kubernetes.io/projected/69c99f72-85ce-4565-be21-569dee03cfdb-kube-api-access-7tht9\") pod \"dnsmasq-dns-c7cbb8f79-jd9zd\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.141818 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.144805 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-h7xmr"] Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.176011 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-j7qqp"] Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.177311 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.199223 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-j7qqp"] Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.359274 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-config\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.359331 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-dns-svc\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.359387 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dz8g\" (UniqueName: \"kubernetes.io/projected/6729e676-e326-4dea-8632-01d8525ddd0a-kube-api-access-6dz8g\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.460383 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dz8g\" (UniqueName: \"kubernetes.io/projected/6729e676-e326-4dea-8632-01d8525ddd0a-kube-api-access-6dz8g\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.460750 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-config\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.460773 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-dns-svc\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.461559 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-dns-svc\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.461576 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-config\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.475239 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dz8g\" (UniqueName: \"kubernetes.io/projected/6729e676-e326-4dea-8632-01d8525ddd0a-kube-api-access-6dz8g\") pod \"dnsmasq-dns-95f5f6995-j7qqp\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.601755 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.758902 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-jd9zd"] Jan 28 16:03:51 crc kubenswrapper[4903]: W0128 16:03:51.766103 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69c99f72_85ce_4565_be21_569dee03cfdb.slice/crio-814df122720962f0805392c2a706b83b837fce56fabd7865e4655aa340add5c8 WatchSource:0}: Error finding container 814df122720962f0805392c2a706b83b837fce56fabd7865e4655aa340add5c8: Status 404 returned error can't find the container with id 814df122720962f0805392c2a706b83b837fce56fabd7865e4655aa340add5c8 Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.955048 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.956843 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.958498 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.959111 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.959412 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.960277 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.960364 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.960490 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-s54s8" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.961957 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 16:03:51 crc kubenswrapper[4903]: I0128 16:03:51.983696 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.037819 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-j7qqp"] Jan 28 16:03:52 crc kubenswrapper[4903]: W0128 16:03:52.044577 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6729e676_e326_4dea_8632_01d8525ddd0a.slice/crio-82c2c7b7e94e9e2231fa3fb835bb1470df4adc9ee12fc818c8f94a22b8ddb650 WatchSource:0}: Error finding container 82c2c7b7e94e9e2231fa3fb835bb1470df4adc9ee12fc818c8f94a22b8ddb650: Status 404 returned error can't find the container with id 82c2c7b7e94e9e2231fa3fb835bb1470df4adc9ee12fc818c8f94a22b8ddb650 Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068228 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068286 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cee6442c-f9ef-4902-b6ec-2bc01a904849-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068380 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068405 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068437 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xkr\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-kube-api-access-k5xkr\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068536 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068636 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068665 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068749 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068798 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.068830 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cee6442c-f9ef-4902-b6ec-2bc01a904849-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.170664 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.170714 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.170747 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xkr\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-kube-api-access-k5xkr\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171077 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171115 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171132 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171167 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171194 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171210 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cee6442c-f9ef-4902-b6ec-2bc01a904849-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171237 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171257 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cee6442c-f9ef-4902-b6ec-2bc01a904849-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171490 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.171797 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.173631 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.174401 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.177233 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.177395 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.183655 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.183696 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.184046 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cee6442c-f9ef-4902-b6ec-2bc01a904849-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.186628 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cee6442c-f9ef-4902-b6ec-2bc01a904849-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.190005 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xkr\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-kube-api-access-k5xkr\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.223749 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.292953 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.352962 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.354940 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.358018 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.358207 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-fs8tl" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.364648 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.364918 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.365020 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.365164 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.365275 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.371641 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476215 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476588 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvtbj\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-kube-api-access-bvtbj\") pod \"rabbitmq-server-0\" (UID: 
\"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476624 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476645 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb51034c-4387-4aba-8eff-6ff960538da9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476685 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476714 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476838 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476901 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476924 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.476951 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb51034c-4387-4aba-8eff-6ff960538da9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.477090 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc 
kubenswrapper[4903]: I0128 16:03:52.578191 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578236 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb51034c-4387-4aba-8eff-6ff960538da9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578277 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578301 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578350 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578374 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578389 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578407 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb51034c-4387-4aba-8eff-6ff960538da9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578432 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578452 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.578472 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvtbj\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-kube-api-access-bvtbj\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.579169 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.579830 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.579881 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.583168 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.584831 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb51034c-4387-4aba-8eff-6ff960538da9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.585656 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.586338 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.588404 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.590596 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/bb51034c-4387-4aba-8eff-6ff960538da9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.591871 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.596635 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvtbj\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-kube-api-access-bvtbj\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.608657 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.691241 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" event={"ID":"6729e676-e326-4dea-8632-01d8525ddd0a","Type":"ContainerStarted","Data":"82c2c7b7e94e9e2231fa3fb835bb1470df4adc9ee12fc818c8f94a22b8ddb650"} Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.694717 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" event={"ID":"69c99f72-85ce-4565-be21-569dee03cfdb","Type":"ContainerStarted","Data":"814df122720962f0805392c2a706b83b837fce56fabd7865e4655aa340add5c8"} Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.715915 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.791632 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 16:03:52 crc kubenswrapper[4903]: W0128 16:03:52.795367 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcee6442c_f9ef_4902_b6ec_2bc01a904849.slice/crio-9d9c5e642889ac0dd416fa9ad89a59a78b150882066355bc40b4a0a11b767a28 WatchSource:0}: Error finding container 9d9c5e642889ac0dd416fa9ad89a59a78b150882066355bc40b4a0a11b767a28: Status 404 returned error can't find the container with id 9d9c5e642889ac0dd416fa9ad89a59a78b150882066355bc40b4a0a11b767a28 Jan 28 16:03:52 crc kubenswrapper[4903]: I0128 16:03:52.957687 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 16:03:52 crc kubenswrapper[4903]: W0128 16:03:52.969427 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb51034c_4387_4aba_8eff_6ff960538da9.slice/crio-2a645a2906ccbba2a909cee4ad281eb556682d4461f3589c473b077e0bbb5072 WatchSource:0}: Error finding container 2a645a2906ccbba2a909cee4ad281eb556682d4461f3589c473b077e0bbb5072: Status 404 returned error can't find the container with id 2a645a2906ccbba2a909cee4ad281eb556682d4461f3589c473b077e0bbb5072 Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.437773 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.439290 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.449355 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.449961 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.449373 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.450441 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-kfqkz" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.450364 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.456233 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496140 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496184 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " 
pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496240 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496260 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496280 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-default\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496295 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kolla-config\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496316 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7rn\" (UniqueName: \"kubernetes.io/projected/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kube-api-access-4r7rn\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.496384 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.598614 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.598704 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.598725 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 
16:03:53.598757 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.598778 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.598795 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kolla-config\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.598808 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-default\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.598829 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r7rn\" (UniqueName: \"kubernetes.io/projected/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kube-api-access-4r7rn\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.599570 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.600101 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.601976 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-default\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.602197 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kolla-config\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.602788 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.621372 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.622027 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.634216 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r7rn\" (UniqueName: \"kubernetes.io/projected/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kube-api-access-4r7rn\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.662418 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " pod="openstack/openstack-galera-0" Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.712687 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb51034c-4387-4aba-8eff-6ff960538da9","Type":"ContainerStarted","Data":"2a645a2906ccbba2a909cee4ad281eb556682d4461f3589c473b077e0bbb5072"} Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.715148 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cee6442c-f9ef-4902-b6ec-2bc01a904849","Type":"ContainerStarted","Data":"9d9c5e642889ac0dd416fa9ad89a59a78b150882066355bc40b4a0a11b767a28"} Jan 28 16:03:53 crc kubenswrapper[4903]: I0128 16:03:53.793383 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.834023 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.837418 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.840385 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.840541 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.840390 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.840955 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tnv27" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.847584 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934061 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934161 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rqn\" (UniqueName: \"kubernetes.io/projected/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kube-api-access-h4rqn\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934199 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934224 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934291 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934324 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934410 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:54 crc kubenswrapper[4903]: I0128 16:03:54.934490 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036514 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rqn\" (UniqueName: \"kubernetes.io/projected/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kube-api-access-h4rqn\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036594 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036625 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036648 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036704 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036731 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036762 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.036816 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.037253 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.038253 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.039287 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.040978 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.041562 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.046777 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.058928 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.059655 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rqn\" (UniqueName: \"kubernetes.io/projected/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kube-api-access-h4rqn\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.075024 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc 
kubenswrapper[4903]: I0128 16:03:55.162974 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.191109 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.193033 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.197691 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.198114 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hzb56" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.198177 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.209697 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.239271 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.241378 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.241433 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvrg\" (UniqueName: \"kubernetes.io/projected/bac3a1bb-718a-42b1-9c87-71258a05b083-kube-api-access-jtvrg\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.241461 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-config-data\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.241502 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-kolla-config\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.343437 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.343487 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtvrg\" 
(UniqueName: \"kubernetes.io/projected/bac3a1bb-718a-42b1-9c87-71258a05b083-kube-api-access-jtvrg\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.343509 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-config-data\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.343564 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-kolla-config\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.343646 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.345377 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-kolla-config\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.345851 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-config-data\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.347377 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.348723 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.365066 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtvrg\" (UniqueName: \"kubernetes.io/projected/bac3a1bb-718a-42b1-9c87-71258a05b083-kube-api-access-jtvrg\") pod \"memcached-0\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " pod="openstack/memcached-0" Jan 28 16:03:55 crc kubenswrapper[4903]: I0128 16:03:55.543980 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 16:03:56 crc kubenswrapper[4903]: I0128 16:03:56.613923 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:03:56 crc kubenswrapper[4903]: I0128 16:03:56.613984 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.027056 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.028484 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.037297 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sl9ql" Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.039916 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.080120 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdkj\" (UniqueName: \"kubernetes.io/projected/30b00809-4c91-4c35-b54a-46b5092fdc87-kube-api-access-npdkj\") pod \"kube-state-metrics-0\" (UID: \"30b00809-4c91-4c35-b54a-46b5092fdc87\") " pod="openstack/kube-state-metrics-0" Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.182006 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdkj\" (UniqueName: \"kubernetes.io/projected/30b00809-4c91-4c35-b54a-46b5092fdc87-kube-api-access-npdkj\") pod \"kube-state-metrics-0\" (UID: \"30b00809-4c91-4c35-b54a-46b5092fdc87\") " pod="openstack/kube-state-metrics-0" Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.208674 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdkj\" (UniqueName: \"kubernetes.io/projected/30b00809-4c91-4c35-b54a-46b5092fdc87-kube-api-access-npdkj\") pod \"kube-state-metrics-0\" (UID: \"30b00809-4c91-4c35-b54a-46b5092fdc87\") " pod="openstack/kube-state-metrics-0" Jan 28 16:03:57 crc kubenswrapper[4903]: I0128 16:03:57.410627 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:04:00 crc kubenswrapper[4903]: I0128 16:04:00.998307 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g8tcr"] Jan 28 16:04:00 crc kubenswrapper[4903]: I0128 16:04:00.999747 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.003236 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.007483 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pvzp9" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.008412 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.017091 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-sdvpf"] Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.034262 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.042812 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-ovn-controller-tls-certs\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.042865 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run-ovn\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.042896 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-combined-ca-bundle\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.042945 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a30cd9-7e56-4a30-8b2d-7786c742c248-scripts\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.042969 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gtm\" (UniqueName: \"kubernetes.io/projected/33a30cd9-7e56-4a30-8b2d-7786c742c248-kube-api-access-26gtm\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.042989 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.043004 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-log-ovn\") pod 
\"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.053088 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g8tcr"] Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.084857 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sdvpf"] Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.144862 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-ovn-controller-tls-certs\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.144955 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-run\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.144985 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6sr\" (UniqueName: \"kubernetes.io/projected/87970b20-51e0-4e11-875a-8dea3b633ac5-kube-api-access-2m6sr\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145033 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run-ovn\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145055 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-log\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145115 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-lib\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145153 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-combined-ca-bundle\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145198 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a30cd9-7e56-4a30-8b2d-7786c742c248-scripts\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 
16:04:01.145214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26gtm\" (UniqueName: \"kubernetes.io/projected/33a30cd9-7e56-4a30-8b2d-7786c742c248-kube-api-access-26gtm\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145230 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145247 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-log-ovn\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145273 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87970b20-51e0-4e11-875a-8dea3b633ac5-scripts\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145302 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-etc-ovs\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145704 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run-ovn\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.145788 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.146029 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-log-ovn\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.148209 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a30cd9-7e56-4a30-8b2d-7786c742c248-scripts\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.153829 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-combined-ca-bundle\") pod \"ovn-controller-g8tcr\" (UID: 
\"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.168284 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26gtm\" (UniqueName: \"kubernetes.io/projected/33a30cd9-7e56-4a30-8b2d-7786c742c248-kube-api-access-26gtm\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.170446 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-ovn-controller-tls-certs\") pod \"ovn-controller-g8tcr\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.246475 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87970b20-51e0-4e11-875a-8dea3b633ac5-scripts\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.246569 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-etc-ovs\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.246663 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-run\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.246692 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6sr\" (UniqueName: \"kubernetes.io/projected/87970b20-51e0-4e11-875a-8dea3b633ac5-kube-api-access-2m6sr\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.247001 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-log\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.247153 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-lib\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.247469 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-log\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.247506 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-run\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.247584 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-etc-ovs\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.247615 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-lib\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.248919 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87970b20-51e0-4e11-875a-8dea3b633ac5-scripts\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.263495 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6sr\" (UniqueName: \"kubernetes.io/projected/87970b20-51e0-4e11-875a-8dea3b633ac5-kube-api-access-2m6sr\") pod \"ovn-controller-ovs-sdvpf\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.321755 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:01 crc kubenswrapper[4903]: I0128 16:04:01.363291 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.405937 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.407365 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.410625 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-xgjwp" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.410922 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.411043 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.411142 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.411424 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.420461 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480359 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-config\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480758 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480788 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480841 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480891 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gf8\" (UniqueName: \"kubernetes.io/projected/0e9123e0-08c8-4892-8378-4f99799d7dfc-kube-api-access-69gf8\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480919 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480940 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.480973 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.582871 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69gf8\" (UniqueName: \"kubernetes.io/projected/0e9123e0-08c8-4892-8378-4f99799d7dfc-kube-api-access-69gf8\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.582936 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.582965 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.583010 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.583048 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-config\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.583096 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.583127 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.583175 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 
16:04:03.584626 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.585158 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.588422 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-config\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.589249 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.597831 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.601608 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.611280 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.614129 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.629362 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.634954 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-snbmm" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.635174 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.635851 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.635969 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.665829 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.672477 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.673388 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69gf8\" (UniqueName: \"kubernetes.io/projected/0e9123e0-08c8-4892-8378-4f99799d7dfc-kube-api-access-69gf8\") pod \"ovsdbserver-nb-0\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.687348 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p7hl\" (UniqueName: \"kubernetes.io/projected/83fe52fb-0760-4173-9567-11d84b522c71-kube-api-access-9p7hl\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.687448 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-config\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.687476 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83fe52fb-0760-4173-9567-11d84b522c71-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.687506 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc 
kubenswrapper[4903]: I0128 16:04:03.687641 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.687803 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.687833 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.687919 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.742723 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.789759 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.789802 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.789848 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.789870 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p7hl\" (UniqueName: \"kubernetes.io/projected/83fe52fb-0760-4173-9567-11d84b522c71-kube-api-access-9p7hl\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.789901 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-config\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc 
kubenswrapper[4903]: I0128 16:04:03.789920 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83fe52fb-0760-4173-9567-11d84b522c71-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.789940 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.789963 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.790145 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.791754 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.794215 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83fe52fb-0760-4173-9567-11d84b522c71-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.799581 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.800922 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-config\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.805763 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.828684 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " 
pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.829079 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p7hl\" (UniqueName: \"kubernetes.io/projected/83fe52fb-0760-4173-9567-11d84b522c71-kube-api-access-9p7hl\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:03 crc kubenswrapper[4903]: I0128 16:04:03.841349 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:04 crc kubenswrapper[4903]: I0128 16:04:04.044461 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:09 crc kubenswrapper[4903]: I0128 16:04:09.422052 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 16:04:09 crc kubenswrapper[4903]: W0128 16:04:09.836337 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac3a1bb_718a_42b1_9c87_71258a05b083.slice/crio-d8b02200375a1f021216a2ca1dbb1b01ec854046bbe9d9b112d4f98d4c7a9d0b WatchSource:0}: Error finding container d8b02200375a1f021216a2ca1dbb1b01ec854046bbe9d9b112d4f98d4c7a9d0b: Status 404 returned error can't find the container with id d8b02200375a1f021216a2ca1dbb1b01ec854046bbe9d9b112d4f98d4c7a9d0b Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.848239 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.848733 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvf9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-wjp4x_openstack(6fd6c635-e949-46f5-b6b4-c28832f7d69b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.850275 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" podUID="6fd6c635-e949-46f5-b6b4-c28832f7d69b" Jan 28 16:04:09 crc kubenswrapper[4903]: I0128 16:04:09.867944 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bac3a1bb-718a-42b1-9c87-71258a05b083","Type":"ContainerStarted","Data":"d8b02200375a1f021216a2ca1dbb1b01ec854046bbe9d9b112d4f98d4c7a9d0b"} Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.873516 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.873708 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tht9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-jd9zd_openstack(69c99f72-85ce-4565-be21-569dee03cfdb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.874889 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.885072 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.885238 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dz8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-j7qqp_openstack(6729e676-e326-4dea-8632-01d8525ddd0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.886396 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.917307 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.917497 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrnmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-h7xmr_openstack(8695d56c-3f7a-4627-9353-21d5604c3541): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:04:09 crc kubenswrapper[4903]: E0128 16:04:09.919511 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" podUID="8695d56c-3f7a-4627-9353-21d5604c3541" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.315439 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:04:10 crc kubenswrapper[4903]: W0128 16:04:10.322255 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30b00809_4c91_4c35_b54a_46b5092fdc87.slice/crio-bafea61a5848b168e4d412bcc8fc7ba3bcb0abb2e92be996f368e21eb608b82f WatchSource:0}: Error finding container bafea61a5848b168e4d412bcc8fc7ba3bcb0abb2e92be996f368e21eb608b82f: Status 404 returned error can't find the container with id bafea61a5848b168e4d412bcc8fc7ba3bcb0abb2e92be996f368e21eb608b82f Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.359805 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.472492 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 16:04:10 crc kubenswrapper[4903]: W0128 16:04:10.494412 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d45d584_dc21_48a4_842d_ab47fcfdd63d.slice/crio-563f678091ae0bfdd59f87ef2dda599d56aa391658b04e5c0448e51b282c611f WatchSource:0}: Error finding container 563f678091ae0bfdd59f87ef2dda599d56aa391658b04e5c0448e51b282c611f: Status 404 returned error can't find the container with id 563f678091ae0bfdd59f87ef2dda599d56aa391658b04e5c0448e51b282c611f Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.504685 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-dns-svc\") pod \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.504764 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-config\") pod \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.504786 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvf9m\" (UniqueName: \"kubernetes.io/projected/6fd6c635-e949-46f5-b6b4-c28832f7d69b-kube-api-access-hvf9m\") pod \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\" (UID: \"6fd6c635-e949-46f5-b6b4-c28832f7d69b\") " Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.506333 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fd6c635-e949-46f5-b6b4-c28832f7d69b" (UID: "6fd6c635-e949-46f5-b6b4-c28832f7d69b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.506721 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-config" (OuterVolumeSpecName: "config") pod "6fd6c635-e949-46f5-b6b4-c28832f7d69b" (UID: "6fd6c635-e949-46f5-b6b4-c28832f7d69b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.516431 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd6c635-e949-46f5-b6b4-c28832f7d69b-kube-api-access-hvf9m" (OuterVolumeSpecName: "kube-api-access-hvf9m") pod "6fd6c635-e949-46f5-b6b4-c28832f7d69b" (UID: "6fd6c635-e949-46f5-b6b4-c28832f7d69b"). InnerVolumeSpecName "kube-api-access-hvf9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.541269 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g8tcr"] Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.551091 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.606642 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.606665 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvf9m\" (UniqueName: \"kubernetes.io/projected/6fd6c635-e949-46f5-b6b4-c28832f7d69b-kube-api-access-hvf9m\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.606675 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fd6c635-e949-46f5-b6b4-c28832f7d69b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.658808 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 16:04:10 crc kubenswrapper[4903]: W0128 16:04:10.661014 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83fe52fb_0760_4173_9567_11d84b522c71.slice/crio-00cec65ead8c82961cd9c1c98242f4731fe55cebb4d82482780f47599df2c142 WatchSource:0}: Error finding container 00cec65ead8c82961cd9c1c98242f4731fe55cebb4d82482780f47599df2c142: Status 404 returned error can't find the container with id 00cec65ead8c82961cd9c1c98242f4731fe55cebb4d82482780f47599df2c142 Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.876796 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.876814 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-wjp4x" event={"ID":"6fd6c635-e949-46f5-b6b4-c28832f7d69b","Type":"ContainerDied","Data":"ae03f811ba347501a835adb3bd58291a2c94c28543dc61d8c918a1204318fa43"} Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.877890 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g8tcr" event={"ID":"33a30cd9-7e56-4a30-8b2d-7786c742c248","Type":"ContainerStarted","Data":"055cb5057de75ce2a7424b7bf377259c82047ce11a931e2a8586cc144da7b543"} Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.879226 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30b00809-4c91-4c35-b54a-46b5092fdc87","Type":"ContainerStarted","Data":"bafea61a5848b168e4d412bcc8fc7ba3bcb0abb2e92be996f368e21eb608b82f"} Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.882746 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"83fe52fb-0760-4173-9567-11d84b522c71","Type":"ContainerStarted","Data":"00cec65ead8c82961cd9c1c98242f4731fe55cebb4d82482780f47599df2c142"} Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.885440 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9d45d584-dc21-48a4-842d-ab47fcfdd63d","Type":"ContainerStarted","Data":"563f678091ae0bfdd59f87ef2dda599d56aa391658b04e5c0448e51b282c611f"} Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.886561 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1423eabe-b2af-4a42-a38e-d5c1c53e7845","Type":"ContainerStarted","Data":"284fa6bde6207697796351bd6359745370a4c4c885896c026ef246c2e04bb7b7"} Jan 28 16:04:10 crc kubenswrapper[4903]: E0128 16:04:10.888321 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" Jan 28 16:04:10 crc kubenswrapper[4903]: E0128 16:04:10.888338 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.966638 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wjp4x"] Jan 28 16:04:10 crc kubenswrapper[4903]: I0128 16:04:10.975036 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-wjp4x"] Jan 28 16:04:11 crc kubenswrapper[4903]: I0128 16:04:11.629303 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 16:04:11 crc kubenswrapper[4903]: I0128 16:04:11.742673 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sdvpf"] Jan 28 16:04:11 crc kubenswrapper[4903]: I0128 16:04:11.896495 4903 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb51034c-4387-4aba-8eff-6ff960538da9","Type":"ContainerStarted","Data":"35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2"} Jan 28 16:04:11 crc kubenswrapper[4903]: I0128 16:04:11.898077 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cee6442c-f9ef-4902-b6ec-2bc01a904849","Type":"ContainerStarted","Data":"03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10"} Jan 28 16:04:12 crc kubenswrapper[4903]: W0128 16:04:12.235127 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e9123e0_08c8_4892_8378_4f99799d7dfc.slice/crio-241a14bfcffaec67bfbc29bf999917853c1f332e6731e9181a7583490b0918fd WatchSource:0}: Error finding container 241a14bfcffaec67bfbc29bf999917853c1f332e6731e9181a7583490b0918fd: Status 404 returned error can't find the container with id 241a14bfcffaec67bfbc29bf999917853c1f332e6731e9181a7583490b0918fd Jan 28 16:04:12 crc kubenswrapper[4903]: W0128 16:04:12.237620 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87970b20_51e0_4e11_875a_8dea3b633ac5.slice/crio-539b3793e3b662d216d7ba3d666e6bedd07dbff5f41e72bbfd01466f59cc881e WatchSource:0}: Error finding container 539b3793e3b662d216d7ba3d666e6bedd07dbff5f41e72bbfd01466f59cc881e: Status 404 returned error can't find the container with id 539b3793e3b662d216d7ba3d666e6bedd07dbff5f41e72bbfd01466f59cc881e Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.300759 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.426035 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd6c635-e949-46f5-b6b4-c28832f7d69b" path="/var/lib/kubelet/pods/6fd6c635-e949-46f5-b6b4-c28832f7d69b/volumes" Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.436349 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrnmn\" (UniqueName: \"kubernetes.io/projected/8695d56c-3f7a-4627-9353-21d5604c3541-kube-api-access-wrnmn\") pod \"8695d56c-3f7a-4627-9353-21d5604c3541\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.436469 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8695d56c-3f7a-4627-9353-21d5604c3541-config\") pod \"8695d56c-3f7a-4627-9353-21d5604c3541\" (UID: \"8695d56c-3f7a-4627-9353-21d5604c3541\") " Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.437358 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8695d56c-3f7a-4627-9353-21d5604c3541-config" (OuterVolumeSpecName: "config") pod "8695d56c-3f7a-4627-9353-21d5604c3541" (UID: "8695d56c-3f7a-4627-9353-21d5604c3541"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.442594 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8695d56c-3f7a-4627-9353-21d5604c3541-kube-api-access-wrnmn" (OuterVolumeSpecName: "kube-api-access-wrnmn") pod "8695d56c-3f7a-4627-9353-21d5604c3541" (UID: "8695d56c-3f7a-4627-9353-21d5604c3541"). 
InnerVolumeSpecName "kube-api-access-wrnmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.539116 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrnmn\" (UniqueName: \"kubernetes.io/projected/8695d56c-3f7a-4627-9353-21d5604c3541-kube-api-access-wrnmn\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.539482 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8695d56c-3f7a-4627-9353-21d5604c3541-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.906901 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sdvpf" event={"ID":"87970b20-51e0-4e11-875a-8dea3b633ac5","Type":"ContainerStarted","Data":"539b3793e3b662d216d7ba3d666e6bedd07dbff5f41e72bbfd01466f59cc881e"} Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.908436 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" event={"ID":"8695d56c-3f7a-4627-9353-21d5604c3541","Type":"ContainerDied","Data":"079788b95c8508270b15e7338bd39fac9cfe5fde8a5b2206ab150b9a2308fec7"} Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.908551 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-h7xmr" Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.912398 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e9123e0-08c8-4892-8378-4f99799d7dfc","Type":"ContainerStarted","Data":"241a14bfcffaec67bfbc29bf999917853c1f332e6731e9181a7583490b0918fd"} Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.961115 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-h7xmr"] Jan 28 16:04:12 crc kubenswrapper[4903]: I0128 16:04:12.969623 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-h7xmr"] Jan 28 16:04:14 crc kubenswrapper[4903]: I0128 16:04:14.434732 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8695d56c-3f7a-4627-9353-21d5604c3541" path="/var/lib/kubelet/pods/8695d56c-3f7a-4627-9353-21d5604c3541/volumes" Jan 28 16:04:14 crc kubenswrapper[4903]: I0128 16:04:14.927621 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bac3a1bb-718a-42b1-9c87-71258a05b083","Type":"ContainerStarted","Data":"22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7"} Jan 28 16:04:14 crc kubenswrapper[4903]: I0128 16:04:14.928971 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 16:04:14 crc kubenswrapper[4903]: I0128 16:04:14.948431 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.809213677 podStartE2EDuration="19.9483698s" podCreationTimestamp="2026-01-28 16:03:55 +0000 UTC" firstStartedPulling="2026-01-28 16:04:09.844661167 +0000 UTC m=+1122.120632678" lastFinishedPulling="2026-01-28 16:04:13.98381726 +0000 UTC m=+1126.259788801" observedRunningTime="2026-01-28 16:04:14.94396564 +0000 UTC m=+1127.219937161" watchObservedRunningTime="2026-01-28 16:04:14.9483698 +0000 UTC m=+1127.224341311" Jan 28 16:04:20 crc kubenswrapper[4903]: I0128 16:04:20.546105 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Jan 28 16:04:22 crc kubenswrapper[4903]: I0128 16:04:22.991063 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1423eabe-b2af-4a42-a38e-d5c1c53e7845","Type":"ContainerStarted","Data":"da1778879f20d0c4622f1b8c62b20be5cfe0c84babdca30cc0f05f8464fed3f0"} Jan 28 16:04:22 crc kubenswrapper[4903]: I0128 16:04:22.993713 4903 generic.go:334] "Generic (PLEG): container finished" podID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerID="440dbc2bb9f9bafe7da3797b181ac6fd61e61287c01229fa2c3e01029fad65ee" exitCode=0 Jan 28 16:04:22 crc kubenswrapper[4903]: I0128 16:04:22.994028 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sdvpf" event={"ID":"87970b20-51e0-4e11-875a-8dea3b633ac5","Type":"ContainerDied","Data":"440dbc2bb9f9bafe7da3797b181ac6fd61e61287c01229fa2c3e01029fad65ee"} Jan 28 16:04:22 crc kubenswrapper[4903]: I0128 16:04:22.997941 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g8tcr" event={"ID":"33a30cd9-7e56-4a30-8b2d-7786c742c248","Type":"ContainerStarted","Data":"d788fb6f80b15b1916c1e431397434ddb83e22295a82de80156a3e89366081b1"} Jan 28 16:04:22 crc kubenswrapper[4903]: I0128 16:04:22.998115 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-g8tcr" Jan 28 16:04:23 crc kubenswrapper[4903]: I0128 16:04:23.000930 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30b00809-4c91-4c35-b54a-46b5092fdc87","Type":"ContainerStarted","Data":"3ac4a9d51af634e41ab5f731ba48387b5e2e0cad4f76dfa9914df21ad083c9a5"} Jan 28 16:04:23 crc kubenswrapper[4903]: I0128 16:04:23.001848 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 16:04:23 crc kubenswrapper[4903]: I0128 16:04:23.004965 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"83fe52fb-0760-4173-9567-11d84b522c71","Type":"ContainerStarted","Data":"b068b0541444e9457126fbba0acffd002fec18d4cbec22a881a9621834e71d6d"} Jan 28 16:04:23 crc kubenswrapper[4903]: I0128 16:04:23.007470 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e9123e0-08c8-4892-8378-4f99799d7dfc","Type":"ContainerStarted","Data":"115db9d03452ef27c97e4292c7d8d47526c8e5ede6cf99f55017f73a5b5958ea"} Jan 28 16:04:23 crc kubenswrapper[4903]: I0128 16:04:23.009301 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9d45d584-dc21-48a4-842d-ab47fcfdd63d","Type":"ContainerStarted","Data":"b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b"} Jan 28 16:04:23 crc kubenswrapper[4903]: I0128 16:04:23.061890 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.005800984 podStartE2EDuration="26.061846917s" podCreationTimestamp="2026-01-28 16:03:57 +0000 UTC" firstStartedPulling="2026-01-28 16:04:10.32505516 +0000 UTC m=+1122.601026671" lastFinishedPulling="2026-01-28 16:04:22.381101083 +0000 UTC m=+1134.657072604" observedRunningTime="2026-01-28 16:04:23.05939736 +0000 UTC m=+1135.335368891" watchObservedRunningTime="2026-01-28 16:04:23.061846917 +0000 UTC m=+1135.337818438" Jan 28 16:04:23 crc kubenswrapper[4903]: I0128 16:04:23.081563 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-g8tcr" podStartSLOduration=12.663719362 podStartE2EDuration="23.081521903s" podCreationTimestamp="2026-01-28 16:04:00 +0000 UTC" firstStartedPulling="2026-01-28 16:04:10.554212986 +0000 UTC m=+1122.830184497" lastFinishedPulling="2026-01-28 16:04:20.972015527 +0000 UTC m=+1133.247987038" observedRunningTime="2026-01-28 16:04:23.076412043 +0000 UTC m=+1135.352383554" watchObservedRunningTime="2026-01-28 16:04:23.081521903 +0000 UTC m=+1135.357493414" Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.017502 4903 generic.go:334] "Generic (PLEG): container finished" podID="69c99f72-85ce-4565-be21-569dee03cfdb" containerID="673421c033134b8591096cb37ea90bcdf3e34ba342442b56ccca8f4621542b66" exitCode=0 Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.017596 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" event={"ID":"69c99f72-85ce-4565-be21-569dee03cfdb","Type":"ContainerDied","Data":"673421c033134b8591096cb37ea90bcdf3e34ba342442b56ccca8f4621542b66"} Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.020603 4903 generic.go:334] "Generic (PLEG): container finished" podID="6729e676-e326-4dea-8632-01d8525ddd0a" containerID="852abcf45306f11dc241da90be47cf39d11b66c9a214ad07de82de682dfa1889" exitCode=0 Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.020663 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" event={"ID":"6729e676-e326-4dea-8632-01d8525ddd0a","Type":"ContainerDied","Data":"852abcf45306f11dc241da90be47cf39d11b66c9a214ad07de82de682dfa1889"} Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.024059 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sdvpf" event={"ID":"87970b20-51e0-4e11-875a-8dea3b633ac5","Type":"ContainerStarted","Data":"2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e"} Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.024095 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sdvpf" event={"ID":"87970b20-51e0-4e11-875a-8dea3b633ac5","Type":"ContainerStarted","Data":"7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d"} Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.024110 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.025157 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:24 crc kubenswrapper[4903]: I0128 16:04:24.077142 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-sdvpf" podStartSLOduration=15.431864549 podStartE2EDuration="24.077119928s" podCreationTimestamp="2026-01-28 16:04:00 +0000 UTC" firstStartedPulling="2026-01-28 16:04:12.246595773 +0000 UTC m=+1124.522567284" lastFinishedPulling="2026-01-28 16:04:20.891851152 +0000 UTC m=+1133.167822663" observedRunningTime="2026-01-28 16:04:24.073601622 +0000 UTC m=+1136.349573133" watchObservedRunningTime="2026-01-28 16:04:24.077119928 +0000 UTC m=+1136.353091449" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.040295 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" event={"ID":"69c99f72-85ce-4565-be21-569dee03cfdb","Type":"ContainerStarted","Data":"165836c25afe30e5405fe99faabbf2f6dec82b58de99a19c3c2b4643a0632a9b"} Jan 28 16:04:26 crc 
kubenswrapper[4903]: I0128 16:04:26.041001 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.041977 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" event={"ID":"6729e676-e326-4dea-8632-01d8525ddd0a","Type":"ContainerStarted","Data":"347efed7f1d27a269593e4a100e9b0b3a16275c28ef9b7127c40dd994b87c474"} Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.042145 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.044334 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"83fe52fb-0760-4173-9567-11d84b522c71","Type":"ContainerStarted","Data":"ea094faee48284617c25b7bce901cd1d485c8c3eb065114f39cedd97df20a515"} Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.046548 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e9123e0-08c8-4892-8378-4f99799d7dfc","Type":"ContainerStarted","Data":"d8a74584b686d6ab5913a3d1a5bdaf5d4115fabca3b023a2faf39781ba497fbe"} Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.047891 4903 generic.go:334] "Generic (PLEG): container finished" podID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerID="da1778879f20d0c4622f1b8c62b20be5cfe0c84babdca30cc0f05f8464fed3f0" exitCode=0 Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.047974 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1423eabe-b2af-4a42-a38e-d5c1c53e7845","Type":"ContainerDied","Data":"da1778879f20d0c4622f1b8c62b20be5cfe0c84babdca30cc0f05f8464fed3f0"} Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.065106 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" podStartSLOduration=4.413809631 podStartE2EDuration="36.065088822s" podCreationTimestamp="2026-01-28 16:03:50 +0000 UTC" firstStartedPulling="2026-01-28 16:03:51.768667026 +0000 UTC m=+1104.044638537" lastFinishedPulling="2026-01-28 16:04:23.419946217 +0000 UTC m=+1135.695917728" observedRunningTime="2026-01-28 16:04:26.059603722 +0000 UTC m=+1138.335575253" watchObservedRunningTime="2026-01-28 16:04:26.065088822 +0000 UTC m=+1138.341060333" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.081879 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.955486943 podStartE2EDuration="24.081861449s" podCreationTimestamp="2026-01-28 16:04:02 +0000 UTC" firstStartedPulling="2026-01-28 16:04:10.663914846 +0000 UTC m=+1122.939886357" lastFinishedPulling="2026-01-28 16:04:25.790289352 +0000 UTC m=+1138.066260863" observedRunningTime="2026-01-28 16:04:26.075462764 +0000 UTC m=+1138.351434295" watchObservedRunningTime="2026-01-28 16:04:26.081861449 +0000 UTC m=+1138.357832960" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.102118 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.618030495 podStartE2EDuration="24.10210085s" podCreationTimestamp="2026-01-28 16:04:02 +0000 UTC" firstStartedPulling="2026-01-28 16:04:12.237577796 +0000 UTC m=+1124.513549307" lastFinishedPulling="2026-01-28 16:04:25.721648141 +0000 UTC m=+1137.997619662" observedRunningTime="2026-01-28 
16:04:26.095711856 +0000 UTC m=+1138.371683367" watchObservedRunningTime="2026-01-28 16:04:26.10210085 +0000 UTC m=+1138.378072361" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.114904 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" podStartSLOduration=-9223372001.73989 podStartE2EDuration="35.114885798s" podCreationTimestamp="2026-01-28 16:03:51 +0000 UTC" firstStartedPulling="2026-01-28 16:03:52.04875188 +0000 UTC m=+1104.324723391" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:26.111048994 +0000 UTC m=+1138.387020515" watchObservedRunningTime="2026-01-28 16:04:26.114885798 +0000 UTC m=+1138.390857329" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.614247 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.614313 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.614359 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.615026 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c77af858064eabcd955be524624cd22b78fb67a11240b85f365bfaee93bd9c0"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:04:26 crc kubenswrapper[4903]: I0128 16:04:26.615085 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://6c77af858064eabcd955be524624cd22b78fb67a11240b85f365bfaee93bd9c0" gracePeriod=600 Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.057078 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1423eabe-b2af-4a42-a38e-d5c1c53e7845","Type":"ContainerStarted","Data":"794515d4b47b412812a3f26bee010ffe855a15147bcf38cac1153e75b984d927"} Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.063259 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="6c77af858064eabcd955be524624cd22b78fb67a11240b85f365bfaee93bd9c0" exitCode=0 Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.063330 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"6c77af858064eabcd955be524624cd22b78fb67a11240b85f365bfaee93bd9c0"} Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.063377 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"993067151bbc38bd867efd2a0048a350ec2c3e1b2fa7b3b79554189c276ba379"} Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.063394 4903 scope.go:117] "RemoveContainer" containerID="954d27b4dc9851fdaed58cb75beeee55d01523bc8e8b245b32b2ba4b08a3a068" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.066385 4903 generic.go:334] "Generic (PLEG): container finished" podID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerID="b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b" exitCode=0 Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.067002 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9d45d584-dc21-48a4-842d-ab47fcfdd63d","Type":"ContainerDied","Data":"b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b"} Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.108911 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.691877821 podStartE2EDuration="34.1088883s" podCreationTimestamp="2026-01-28 16:03:53 +0000 UTC" firstStartedPulling="2026-01-28 16:04:10.555169712 +0000 UTC m=+1122.831141213" lastFinishedPulling="2026-01-28 16:04:20.972180191 +0000 UTC m=+1133.248151692" observedRunningTime="2026-01-28 16:04:27.104867371 +0000 UTC m=+1139.380838892" watchObservedRunningTime="2026-01-28 16:04:27.1088883 +0000 UTC m=+1139.384859811" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.453621 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-jd9zd"] Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.471241 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.495510 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-v2tcm"] Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.533027 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-v2tcm"] Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.533133 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.652572 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.652631 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-config\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.652720 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwmn\" (UniqueName: \"kubernetes.io/projected/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-kube-api-access-pqwmn\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.743738 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.754575 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwmn\" (UniqueName: \"kubernetes.io/projected/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-kube-api-access-pqwmn\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.754648 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.754682 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-config\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.755617 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-config\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.756446 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.786643 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwmn\" (UniqueName: 
\"kubernetes.io/projected/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-kube-api-access-pqwmn\") pod \"dnsmasq-dns-7f9f9f545f-v2tcm\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.790952 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:27 crc kubenswrapper[4903]: I0128 16:04:27.857514 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.045370 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.075491 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9d45d584-dc21-48a4-842d-ab47fcfdd63d","Type":"ContainerStarted","Data":"0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9"} Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.081115 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.081895 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" containerName="dnsmasq-dns" containerID="cri-o://165836c25afe30e5405fe99faabbf2f6dec82b58de99a19c3c2b4643a0632a9b" gracePeriod=10 Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.088777 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.107831 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.784177263 podStartE2EDuration="36.107812697s" podCreationTimestamp="2026-01-28 16:03:52 +0000 UTC" firstStartedPulling="2026-01-28 16:04:10.497165251 +0000 UTC m=+1122.773136752" lastFinishedPulling="2026-01-28 16:04:20.820800675 +0000 UTC m=+1133.096772186" observedRunningTime="2026-01-28 16:04:28.103255673 +0000 UTC m=+1140.379227194" watchObservedRunningTime="2026-01-28 16:04:28.107812697 +0000 UTC m=+1140.383784208" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.136225 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.270276 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-v2tcm"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.409894 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-j7qqp"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.410149 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" containerName="dnsmasq-dns" containerID="cri-o://347efed7f1d27a269593e4a100e9b0b3a16275c28ef9b7127c40dd994b87c474" gracePeriod=10 Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.471732 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb874f4c9-w4w49"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.473339 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.483397 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.496743 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb874f4c9-w4w49"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.564096 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-sqdt2"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.565306 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.569200 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.574435 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwb6v\" (UniqueName: \"kubernetes.io/projected/0cc3b30b-780e-4ae6-a86a-41f029101eb8-kube-api-access-hwb6v\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.574650 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-dns-svc\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.574714 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-ovsdbserver-nb\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.574779 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-config\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.579252 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sqdt2"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.612494 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.620795 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.633167 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.636873 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.637193 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.637453 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hkqgj" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.649849 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676242 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwb6v\" (UniqueName: \"kubernetes.io/projected/0cc3b30b-780e-4ae6-a86a-41f029101eb8-kube-api-access-hwb6v\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676319 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-combined-ca-bundle\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676416 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-dns-svc\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676435 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-ovsdbserver-nb\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676468 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovn-rundir\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676501 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676522 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovs-rundir\") pod 
\"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676557 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-config\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676671 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8080a17-9166-4721-868f-c43799472922-config\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.676704 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrdb\" (UniqueName: \"kubernetes.io/projected/c8080a17-9166-4721-868f-c43799472922-kube-api-access-fkrdb\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.677980 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-ovsdbserver-nb\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.678516 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-dns-svc\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.679480 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-config\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.741244 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwb6v\" (UniqueName: \"kubernetes.io/projected/0cc3b30b-780e-4ae6-a86a-41f029101eb8-kube-api-access-hwb6v\") pod \"dnsmasq-dns-cb874f4c9-w4w49\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779650 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-cache\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779721 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " 
pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779750 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovs-rundir\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779784 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8080a17-9166-4721-868f-c43799472922-config\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779810 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrdb\" (UniqueName: \"kubernetes.io/projected/c8080a17-9166-4721-868f-c43799472922-kube-api-access-fkrdb\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779846 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779880 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsbxq\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-kube-api-access-bsbxq\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779907 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779951 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-combined-ca-bundle\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.779972 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-lock\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.780018 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe73c5e-1acc-4125-8ff9-e42b69488039-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.780039 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovn-rundir\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.780388 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovn-rundir\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.782430 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8080a17-9166-4721-868f-c43799472922-config\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.782523 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovs-rundir\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.785281 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.790137 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-combined-ca-bundle\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.807914 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.832397 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrdb\" (UniqueName: \"kubernetes.io/projected/c8080a17-9166-4721-868f-c43799472922-kube-api-access-fkrdb\") pod \"ovn-controller-metrics-sqdt2\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.843112 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-v2tcm"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.869624 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-85fn5"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.871450 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.875805 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.878413 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-85fn5"] Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.882923 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-cache\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.883025 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.883065 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsbxq\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-kube-api-access-bsbxq\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.883089 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.883134 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-lock\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.883176 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe73c5e-1acc-4125-8ff9-e42b69488039-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.883352 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: E0128 16:04:28.883408 4903 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 16:04:28 crc kubenswrapper[4903]: E0128 16:04:28.883432 4903 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 16:04:28 crc kubenswrapper[4903]: E0128 16:04:28.883488 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift podName:2fe73c5e-1acc-4125-8ff9-e42b69488039 nodeName:}" failed. 
No retries permitted until 2026-01-28 16:04:29.383467878 +0000 UTC m=+1141.659439389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift") pod "swift-storage-0" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039") : configmap "swift-ring-files" not found Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.883993 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-lock\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.884350 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-cache\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.900992 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe73c5e-1acc-4125-8ff9-e42b69488039-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.912015 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.914144 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsbxq\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-kube-api-access-bsbxq\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.915692 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.984919 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbms\" (UniqueName: \"kubernetes.io/projected/3eed0863-7a63-42a2-8f91-e98d60e5770f-kube-api-access-xfbms\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.985030 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.985088 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 
16:04:28.985206 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-config\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:28 crc kubenswrapper[4903]: I0128 16:04:28.985272 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.045967 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.087466 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.087549 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-config\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.087651 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.087739 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbms\" (UniqueName: \"kubernetes.io/projected/3eed0863-7a63-42a2-8f91-e98d60e5770f-kube-api-access-xfbms\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.087806 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.089097 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.090024 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " 
pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.090822 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.091009 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-config\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.113186 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.119158 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbms\" (UniqueName: \"kubernetes.io/projected/3eed0863-7a63-42a2-8f91-e98d60e5770f-kube-api-access-xfbms\") pod \"dnsmasq-dns-6cb545bd4c-85fn5\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.124715 4903 generic.go:334] "Generic (PLEG): container finished" podID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerID="0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9" exitCode=0 Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.124804 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" event={"ID":"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d","Type":"ContainerDied","Data":"0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9"} Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.124838 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" event={"ID":"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d","Type":"ContainerStarted","Data":"76fb5f40c35cf77899e44b7cb7eb84a5468e0696de5a90dad78fefc12d4d28a2"} Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.127552 4903 generic.go:334] "Generic (PLEG): container finished" podID="69c99f72-85ce-4565-be21-569dee03cfdb" containerID="165836c25afe30e5405fe99faabbf2f6dec82b58de99a19c3c2b4643a0632a9b" exitCode=0 Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.127611 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" event={"ID":"69c99f72-85ce-4565-be21-569dee03cfdb","Type":"ContainerDied","Data":"165836c25afe30e5405fe99faabbf2f6dec82b58de99a19c3c2b4643a0632a9b"} Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.162285 4903 generic.go:334] "Generic (PLEG): container finished" podID="6729e676-e326-4dea-8632-01d8525ddd0a" containerID="347efed7f1d27a269593e4a100e9b0b3a16275c28ef9b7127c40dd994b87c474" exitCode=0 Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.162637 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" event={"ID":"6729e676-e326-4dea-8632-01d8525ddd0a","Type":"ContainerDied","Data":"347efed7f1d27a269593e4a100e9b0b3a16275c28ef9b7127c40dd994b87c474"} Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.171044 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.193262 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.280120 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.299304 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-config\") pod \"6729e676-e326-4dea-8632-01d8525ddd0a\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.299417 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-dns-svc\") pod \"6729e676-e326-4dea-8632-01d8525ddd0a\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.299479 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dz8g\" (UniqueName: \"kubernetes.io/projected/6729e676-e326-4dea-8632-01d8525ddd0a-kube-api-access-6dz8g\") pod \"6729e676-e326-4dea-8632-01d8525ddd0a\" (UID: \"6729e676-e326-4dea-8632-01d8525ddd0a\") " Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.330802 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6729e676-e326-4dea-8632-01d8525ddd0a-kube-api-access-6dz8g" (OuterVolumeSpecName: "kube-api-access-6dz8g") pod "6729e676-e326-4dea-8632-01d8525ddd0a" (UID: "6729e676-e326-4dea-8632-01d8525ddd0a"). InnerVolumeSpecName "kube-api-access-6dz8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.400991 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-config\") pod \"69c99f72-85ce-4565-be21-569dee03cfdb\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.401132 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tht9\" (UniqueName: \"kubernetes.io/projected/69c99f72-85ce-4565-be21-569dee03cfdb-kube-api-access-7tht9\") pod \"69c99f72-85ce-4565-be21-569dee03cfdb\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.401158 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-dns-svc\") pod \"69c99f72-85ce-4565-be21-569dee03cfdb\" (UID: \"69c99f72-85ce-4565-be21-569dee03cfdb\") " Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.401457 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.401596 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dz8g\" (UniqueName: \"kubernetes.io/projected/6729e676-e326-4dea-8632-01d8525ddd0a-kube-api-access-6dz8g\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:29 crc kubenswrapper[4903]: E0128 16:04:29.401694 4903 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 16:04:29 crc kubenswrapper[4903]: E0128 16:04:29.401705 4903 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 16:04:29 crc kubenswrapper[4903]: E0128 16:04:29.401746 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift podName:2fe73c5e-1acc-4125-8ff9-e42b69488039 nodeName:}" failed. No retries permitted until 2026-01-28 16:04:30.401733024 +0000 UTC m=+1142.677704535 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift") pod "swift-storage-0" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039") : configmap "swift-ring-files" not found Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.432598 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6729e676-e326-4dea-8632-01d8525ddd0a" (UID: "6729e676-e326-4dea-8632-01d8525ddd0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.434865 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c99f72-85ce-4565-be21-569dee03cfdb-kube-api-access-7tht9" (OuterVolumeSpecName: "kube-api-access-7tht9") pod "69c99f72-85ce-4565-be21-569dee03cfdb" (UID: "69c99f72-85ce-4565-be21-569dee03cfdb"). 
InnerVolumeSpecName "kube-api-access-7tht9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.447881 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-config" (OuterVolumeSpecName: "config") pod "6729e676-e326-4dea-8632-01d8525ddd0a" (UID: "6729e676-e326-4dea-8632-01d8525ddd0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.475453 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-config" (OuterVolumeSpecName: "config") pod "69c99f72-85ce-4565-be21-569dee03cfdb" (UID: "69c99f72-85ce-4565-be21-569dee03cfdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.477444 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 16:04:29 crc kubenswrapper[4903]: E0128 16:04:29.477845 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" containerName="dnsmasq-dns" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.477862 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" containerName="dnsmasq-dns" Jan 28 16:04:29 crc kubenswrapper[4903]: E0128 16:04:29.477876 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" containerName="init" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.477882 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" containerName="init" Jan 28 16:04:29 crc kubenswrapper[4903]: E0128 16:04:29.477908 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" containerName="init" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.477918 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" containerName="init" Jan 28 16:04:29 crc kubenswrapper[4903]: E0128 16:04:29.477934 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" containerName="dnsmasq-dns" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.477941 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" containerName="dnsmasq-dns" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.478129 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" containerName="dnsmasq-dns" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.478160 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" containerName="dnsmasq-dns" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.480143 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.482457 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.482817 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.483075 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wmdz6" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.483408 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.486156 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb874f4c9-w4w49"] Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.499403 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69c99f72-85ce-4565-be21-569dee03cfdb" (UID: "69c99f72-85ce-4565-be21-569dee03cfdb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.502855 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tht9\" (UniqueName: \"kubernetes.io/projected/69c99f72-85ce-4565-be21-569dee03cfdb-kube-api-access-7tht9\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.502880 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.502889 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.502898 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6729e676-e326-4dea-8632-01d8525ddd0a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.502907 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c99f72-85ce-4565-be21-569dee03cfdb-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.504806 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.603953 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.604272 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 
16:04:29.604325 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-scripts\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.604351 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-config\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.604372 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfd8\" (UniqueName: \"kubernetes.io/projected/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-kube-api-access-wtfd8\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.604405 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.604422 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.613252 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sqdt2"] Jan 28 16:04:29 crc kubenswrapper[4903]: W0128 16:04:29.616060 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8080a17_9166_4721_868f_c43799472922.slice/crio-773359f6f7505f538c002ad4062bfa1b612aeb5709fb9efdc39360e460746594 WatchSource:0}: Error finding container 773359f6f7505f538c002ad4062bfa1b612aeb5709fb9efdc39360e460746594: Status 404 returned error can't find the container with id 773359f6f7505f538c002ad4062bfa1b612aeb5709fb9efdc39360e460746594 Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772087 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772179 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-scripts\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-config\") pod \"ovn-northd-0\" (UID: 
\"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772240 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtfd8\" (UniqueName: \"kubernetes.io/projected/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-kube-api-access-wtfd8\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772285 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772303 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772332 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.772859 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.775169 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-scripts\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.775398 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-config\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.776690 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.778794 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.783760 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.795847 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtfd8\" (UniqueName: \"kubernetes.io/projected/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-kube-api-access-wtfd8\") pod \"ovn-northd-0\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " pod="openstack/ovn-northd-0" Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.822386 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-85fn5"] Jan 28 16:04:29 crc kubenswrapper[4903]: W0128 16:04:29.834972 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eed0863_7a63_42a2_8f91_e98d60e5770f.slice/crio-1d03a50e873ae30975a3d6c387b26ac15b7ad7042a726774b1c77ede3ec8ae46 WatchSource:0}: Error finding container 1d03a50e873ae30975a3d6c387b26ac15b7ad7042a726774b1c77ede3ec8ae46: Status 404 returned error can't find the container with id 1d03a50e873ae30975a3d6c387b26ac15b7ad7042a726774b1c77ede3ec8ae46 Jan 28 16:04:29 crc kubenswrapper[4903]: I0128 16:04:29.940232 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.170201 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sqdt2" event={"ID":"c8080a17-9166-4721-868f-c43799472922","Type":"ContainerStarted","Data":"38013f51046f369b6687e2c5d59c171aa0431838ce787e24819e162b03bcc631"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.170483 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sqdt2" event={"ID":"c8080a17-9166-4721-868f-c43799472922","Type":"ContainerStarted","Data":"773359f6f7505f538c002ad4062bfa1b612aeb5709fb9efdc39360e460746594"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.178260 4903 generic.go:334] "Generic (PLEG): container finished" podID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerID="d945d7e5cbfd3ac32d39d07de6bfb3eb58d85c2b5b800ecfec188ed6dfba9be2" exitCode=0 Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.178367 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" event={"ID":"0cc3b30b-780e-4ae6-a86a-41f029101eb8","Type":"ContainerDied","Data":"d945d7e5cbfd3ac32d39d07de6bfb3eb58d85c2b5b800ecfec188ed6dfba9be2"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.178396 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" event={"ID":"0cc3b30b-780e-4ae6-a86a-41f029101eb8","Type":"ContainerStarted","Data":"9b309d23a9a50d20f647414afcb520cf1fb43b0cc82b51db1cb86fd5fb0d94b5"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.180982 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" event={"ID":"69c99f72-85ce-4565-be21-569dee03cfdb","Type":"ContainerDied","Data":"814df122720962f0805392c2a706b83b837fce56fabd7865e4655aa340add5c8"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.181046 4903 scope.go:117] "RemoveContainer" containerID="165836c25afe30e5405fe99faabbf2f6dec82b58de99a19c3c2b4643a0632a9b" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.181289 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-jd9zd" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.184424 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" event={"ID":"6729e676-e326-4dea-8632-01d8525ddd0a","Type":"ContainerDied","Data":"82c2c7b7e94e9e2231fa3fb835bb1470df4adc9ee12fc818c8f94a22b8ddb650"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.184909 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-j7qqp" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.188939 4903 generic.go:334] "Generic (PLEG): container finished" podID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerID="44806975fe6725d16529c77a438148c9ba42fe17b83fd84b083e81954aa8d5ae" exitCode=0 Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.189029 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" event={"ID":"3eed0863-7a63-42a2-8f91-e98d60e5770f","Type":"ContainerDied","Data":"44806975fe6725d16529c77a438148c9ba42fe17b83fd84b083e81954aa8d5ae"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.190213 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" event={"ID":"3eed0863-7a63-42a2-8f91-e98d60e5770f","Type":"ContainerStarted","Data":"1d03a50e873ae30975a3d6c387b26ac15b7ad7042a726774b1c77ede3ec8ae46"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.194545 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-sqdt2" podStartSLOduration=2.194512091 podStartE2EDuration="2.194512091s" podCreationTimestamp="2026-01-28 16:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:30.193045651 +0000 UTC m=+1142.469017162" watchObservedRunningTime="2026-01-28 16:04:30.194512091 +0000 UTC m=+1142.470483602" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.208700 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" podUID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerName="dnsmasq-dns" containerID="cri-o://6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1" gracePeriod=10 Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.208883 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" event={"ID":"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d","Type":"ContainerStarted","Data":"6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1"} Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.209894 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.238513 4903 scope.go:117] "RemoveContainer" containerID="673421c033134b8591096cb37ea90bcdf3e34ba342442b56ccca8f4621542b66" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.265768 4903 scope.go:117] "RemoveContainer" containerID="347efed7f1d27a269593e4a100e9b0b3a16275c28ef9b7127c40dd994b87c474" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.299470 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-j7qqp"] Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.316091 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-95f5f6995-j7qqp"] Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.331319 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-jd9zd"] Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.336742 4903 scope.go:117] "RemoveContainer" containerID="852abcf45306f11dc241da90be47cf39d11b66c9a214ad07de82de682dfa1889" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.341926 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-jd9zd"] Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.343095 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" podStartSLOduration=3.34307263 podStartE2EDuration="3.34307263s" podCreationTimestamp="2026-01-28 16:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:30.316059064 +0000 UTC m=+1142.592030585" watchObservedRunningTime="2026-01-28 16:04:30.34307263 +0000 UTC m=+1142.619044141" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.386506 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.435793 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6729e676-e326-4dea-8632-01d8525ddd0a" path="/var/lib/kubelet/pods/6729e676-e326-4dea-8632-01d8525ddd0a/volumes" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.436332 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c99f72-85ce-4565-be21-569dee03cfdb" path="/var/lib/kubelet/pods/69c99f72-85ce-4565-be21-569dee03cfdb/volumes" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.494272 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:30 crc kubenswrapper[4903]: E0128 16:04:30.494643 4903 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 16:04:30 crc kubenswrapper[4903]: E0128 16:04:30.494661 4903 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 16:04:30 crc kubenswrapper[4903]: E0128 16:04:30.494712 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift podName:2fe73c5e-1acc-4125-8ff9-e42b69488039 nodeName:}" failed. No retries permitted until 2026-01-28 16:04:32.494695253 +0000 UTC m=+1144.770666764 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift") pod "swift-storage-0" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039") : configmap "swift-ring-files" not found Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.708755 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.798977 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-dns-svc\") pod \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.799027 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqwmn\" (UniqueName: \"kubernetes.io/projected/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-kube-api-access-pqwmn\") pod \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.799215 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-config\") pod \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\" (UID: \"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d\") " Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.830057 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-kube-api-access-pqwmn" (OuterVolumeSpecName: "kube-api-access-pqwmn") pod "d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" (UID: "d4c25d42-e33a-4c9f-9c45-6a49ad9df35d"). InnerVolumeSpecName "kube-api-access-pqwmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.847203 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" (UID: "d4c25d42-e33a-4c9f-9c45-6a49ad9df35d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.858226 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-config" (OuterVolumeSpecName: "config") pod "d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" (UID: "d4c25d42-e33a-4c9f-9c45-6a49ad9df35d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.901333 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.901368 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqwmn\" (UniqueName: \"kubernetes.io/projected/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-kube-api-access-pqwmn\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:30 crc kubenswrapper[4903]: I0128 16:04:30.901379 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.219212 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe","Type":"ContainerStarted","Data":"33013a2f4fd6712d7e2d09406c9ee9ba685fde8f46cb3dc4be4414af99408a01"} Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.221469 4903 generic.go:334] "Generic (PLEG): container finished" podID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerID="6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1" exitCode=0 Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.221548 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.221544 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" event={"ID":"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d","Type":"ContainerDied","Data":"6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1"} Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.221607 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-v2tcm" event={"ID":"d4c25d42-e33a-4c9f-9c45-6a49ad9df35d","Type":"ContainerDied","Data":"76fb5f40c35cf77899e44b7cb7eb84a5468e0696de5a90dad78fefc12d4d28a2"} Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.221630 4903 scope.go:117] "RemoveContainer" containerID="6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1" Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.249913 4903 scope.go:117] "RemoveContainer" containerID="0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9" Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.263501 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-v2tcm"] Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.270887 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-v2tcm"] Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.279900 4903 scope.go:117] "RemoveContainer" containerID="6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1" Jan 28 16:04:31 crc kubenswrapper[4903]: E0128 16:04:31.280290 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1\": container with ID starting with 6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1 not found: ID does not exist" containerID="6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1" Jan 28 
16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.280341 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1"} err="failed to get container status \"6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1\": rpc error: code = NotFound desc = could not find container \"6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1\": container with ID starting with 6e8b84f05b20a0aa087597d2302528f002f28ba721a94a7bb4bcb77b8b696eb1 not found: ID does not exist" Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.280365 4903 scope.go:117] "RemoveContainer" containerID="0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9" Jan 28 16:04:31 crc kubenswrapper[4903]: E0128 16:04:31.280812 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9\": container with ID starting with 0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9 not found: ID does not exist" containerID="0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9" Jan 28 16:04:31 crc kubenswrapper[4903]: I0128 16:04:31.280842 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9"} err="failed to get container status \"0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9\": rpc error: code = NotFound desc = could not find container \"0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9\": container with ID starting with 0e5a2401437ac57bd118ce691b0edc407ef50c2aebfbe5a469ff19ae97f995c9 not found: ID does not exist" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.423959 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" path="/var/lib/kubelet/pods/d4c25d42-e33a-4c9f-9c45-6a49ad9df35d/volumes" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.527989 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:32 crc kubenswrapper[4903]: E0128 16:04:32.528224 4903 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 16:04:32 crc kubenswrapper[4903]: E0128 16:04:32.528252 4903 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 16:04:32 crc kubenswrapper[4903]: E0128 16:04:32.528308 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift podName:2fe73c5e-1acc-4125-8ff9-e42b69488039 nodeName:}" failed. No retries permitted until 2026-01-28 16:04:36.528287419 +0000 UTC m=+1148.804258930 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift") pod "swift-storage-0" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039") : configmap "swift-ring-files" not found Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.551438 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gxgmt"] Jan 28 16:04:32 crc kubenswrapper[4903]: E0128 16:04:32.552310 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerName="dnsmasq-dns" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.552458 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerName="dnsmasq-dns" Jan 28 16:04:32 crc kubenswrapper[4903]: E0128 16:04:32.552608 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerName="init" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.552718 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerName="init" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.553135 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c25d42-e33a-4c9f-9c45-6a49ad9df35d" containerName="dnsmasq-dns" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.597505 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.600077 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.601703 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.601902 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.606400 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gxgmt"] Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.731379 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-scripts\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.731447 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-ring-data-devices\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.731502 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-combined-ca-bundle\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.731567 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2cl\" (UniqueName: \"kubernetes.io/projected/d4dbcd08-6def-4380-8cc4-93a156624deb-kube-api-access-mx2cl\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.731721 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-swiftconf\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.731760 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-dispersionconf\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.731814 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d4dbcd08-6def-4380-8cc4-93a156624deb-etc-swift\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.833818 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-dispersionconf\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.834287 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d4dbcd08-6def-4380-8cc4-93a156624deb-etc-swift\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.834469 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-scripts\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.834606 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-ring-data-devices\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.834713 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-combined-ca-bundle\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.834834 4903 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mx2cl\" (UniqueName: \"kubernetes.io/projected/d4dbcd08-6def-4380-8cc4-93a156624deb-kube-api-access-mx2cl\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.834957 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d4dbcd08-6def-4380-8cc4-93a156624deb-etc-swift\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.835110 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-swiftconf\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.835818 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-ring-data-devices\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.837423 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-scripts\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.840785 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-dispersionconf\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.841879 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-swiftconf\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.842576 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-combined-ca-bundle\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.861166 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx2cl\" (UniqueName: \"kubernetes.io/projected/d4dbcd08-6def-4380-8cc4-93a156624deb-kube-api-access-mx2cl\") pod \"swift-ring-rebalance-gxgmt\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:32 crc kubenswrapper[4903]: I0128 16:04:32.912268 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:33 crc kubenswrapper[4903]: I0128 16:04:33.267348 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gxgmt"] Jan 28 16:04:33 crc kubenswrapper[4903]: I0128 16:04:33.795124 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 16:04:33 crc kubenswrapper[4903]: I0128 16:04:33.795481 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 16:04:34 crc kubenswrapper[4903]: I0128 16:04:34.253340 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gxgmt" event={"ID":"d4dbcd08-6def-4380-8cc4-93a156624deb","Type":"ContainerStarted","Data":"168dc3ae9f50083ed890b4e0d4172c2474e569f5d06d3c0d1f6945047f3ae97a"} Jan 28 16:04:35 crc kubenswrapper[4903]: I0128 16:04:35.164095 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 16:04:35 crc kubenswrapper[4903]: I0128 16:04:35.164146 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 16:04:36 crc kubenswrapper[4903]: I0128 16:04:36.602449 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:36 crc kubenswrapper[4903]: E0128 16:04:36.602678 4903 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 16:04:36 crc kubenswrapper[4903]: E0128 16:04:36.602705 4903 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 16:04:36 crc kubenswrapper[4903]: E0128 16:04:36.602755 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift podName:2fe73c5e-1acc-4125-8ff9-e42b69488039 nodeName:}" failed. No retries permitted until 2026-01-28 16:04:44.602738911 +0000 UTC m=+1156.878710422 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift") pod "swift-storage-0" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039") : configmap "swift-ring-files" not found Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.288373 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" event={"ID":"0cc3b30b-780e-4ae6-a86a-41f029101eb8","Type":"ContainerStarted","Data":"331cbe01572bd4ac44fc63da32e293dd5851dde4de002ec71038e972a3803775"} Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.288761 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.291295 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" event={"ID":"3eed0863-7a63-42a2-8f91-e98d60e5770f","Type":"ContainerStarted","Data":"aa4f7f08c087fdda6c2798cace6400d2c72036cdc1120ab22fce52d20d57f338"} Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.291495 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.309010 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" podStartSLOduration=10.308991576 podStartE2EDuration="10.308991576s" podCreationTimestamp="2026-01-28 16:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:38.306620251 +0000 UTC m=+1150.582591762" watchObservedRunningTime="2026-01-28 16:04:38.308991576 +0000 UTC m=+1150.584963107" Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.318274 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.330959 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" podStartSLOduration=10.33080244 podStartE2EDuration="10.33080244s" podCreationTimestamp="2026-01-28 16:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:38.324480198 +0000 UTC m=+1150.600451709" watchObservedRunningTime="2026-01-28 16:04:38.33080244 +0000 UTC m=+1150.606773951" Jan 28 16:04:38 crc kubenswrapper[4903]: I0128 16:04:38.390287 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 16:04:39 crc kubenswrapper[4903]: I0128 16:04:39.316779 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe","Type":"ContainerStarted","Data":"858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8"} Jan 28 16:04:39 crc kubenswrapper[4903]: I0128 16:04:39.317623 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 16:04:39 crc kubenswrapper[4903]: I0128 16:04:39.317638 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe","Type":"ContainerStarted","Data":"a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9"} Jan 28 16:04:39 crc kubenswrapper[4903]: I0128 16:04:39.347376 
4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.886040468 podStartE2EDuration="10.347358306s" podCreationTimestamp="2026-01-28 16:04:29 +0000 UTC" firstStartedPulling="2026-01-28 16:04:30.427632815 +0000 UTC m=+1142.703604326" lastFinishedPulling="2026-01-28 16:04:38.888950653 +0000 UTC m=+1151.164922164" observedRunningTime="2026-01-28 16:04:39.339264096 +0000 UTC m=+1151.615235597" watchObservedRunningTime="2026-01-28 16:04:39.347358306 +0000 UTC m=+1151.623329817" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.395833 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.480599 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.787892 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-19a1-account-create-update-zgvgv"] Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.789497 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.794058 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.800804 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cmxmp"] Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.801899 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.810838 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-19a1-account-create-update-zgvgv"] Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.852618 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cmxmp"] Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.886034 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbb48c98-8877-4be3-b406-096222fd33e6-operator-scripts\") pod \"glance-19a1-account-create-update-zgvgv\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.886110 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6w2k\" (UniqueName: \"kubernetes.io/projected/c841b377-a95f-4533-bcd3-4f5a53a36301-kube-api-access-d6w2k\") pod \"glance-db-create-cmxmp\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.886317 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-249qb\" (UniqueName: \"kubernetes.io/projected/fbb48c98-8877-4be3-b406-096222fd33e6-kube-api-access-249qb\") pod \"glance-19a1-account-create-update-zgvgv\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.886432 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c841b377-a95f-4533-bcd3-4f5a53a36301-operator-scripts\") pod \"glance-db-create-cmxmp\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.988265 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c841b377-a95f-4533-bcd3-4f5a53a36301-operator-scripts\") pod \"glance-db-create-cmxmp\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.988331 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbb48c98-8877-4be3-b406-096222fd33e6-operator-scripts\") pod \"glance-19a1-account-create-update-zgvgv\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.988369 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6w2k\" (UniqueName: \"kubernetes.io/projected/c841b377-a95f-4533-bcd3-4f5a53a36301-kube-api-access-d6w2k\") pod \"glance-db-create-cmxmp\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.988433 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-249qb\" (UniqueName: \"kubernetes.io/projected/fbb48c98-8877-4be3-b406-096222fd33e6-kube-api-access-249qb\") pod \"glance-19a1-account-create-update-zgvgv\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.989360 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c841b377-a95f-4533-bcd3-4f5a53a36301-operator-scripts\") pod \"glance-db-create-cmxmp\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:40 crc kubenswrapper[4903]: I0128 16:04:40.989866 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbb48c98-8877-4be3-b406-096222fd33e6-operator-scripts\") pod \"glance-19a1-account-create-update-zgvgv\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:41 crc kubenswrapper[4903]: I0128 16:04:41.005641 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-249qb\" (UniqueName: \"kubernetes.io/projected/fbb48c98-8877-4be3-b406-096222fd33e6-kube-api-access-249qb\") pod \"glance-19a1-account-create-update-zgvgv\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:41 crc kubenswrapper[4903]: I0128 16:04:41.014412 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6w2k\" (UniqueName: \"kubernetes.io/projected/c841b377-a95f-4533-bcd3-4f5a53a36301-kube-api-access-d6w2k\") pod \"glance-db-create-cmxmp\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:41 crc kubenswrapper[4903]: I0128 16:04:41.108841 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:41 crc kubenswrapper[4903]: I0128 16:04:41.131650 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:41 crc kubenswrapper[4903]: I0128 16:04:41.850384 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cmxmp"] Jan 28 16:04:41 crc kubenswrapper[4903]: W0128 16:04:41.854703 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc841b377_a95f_4533_bcd3_4f5a53a36301.slice/crio-11ad50417821933460da73b757f34c90cb7608061f4bf8b73e25ce46de158498 WatchSource:0}: Error finding container 11ad50417821933460da73b757f34c90cb7608061f4bf8b73e25ce46de158498: Status 404 returned error can't find the container with id 11ad50417821933460da73b757f34c90cb7608061f4bf8b73e25ce46de158498 Jan 28 16:04:41 crc kubenswrapper[4903]: I0128 16:04:41.941147 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-19a1-account-create-update-zgvgv"] Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.339953 4903 generic.go:334] "Generic (PLEG): container finished" podID="c841b377-a95f-4533-bcd3-4f5a53a36301" containerID="9b433c7bd6b0342ec2ec13718d0984a80ca303c5b8c24a199c67fdf90da8fac2" exitCode=0 Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.339993 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cmxmp" event={"ID":"c841b377-a95f-4533-bcd3-4f5a53a36301","Type":"ContainerDied","Data":"9b433c7bd6b0342ec2ec13718d0984a80ca303c5b8c24a199c67fdf90da8fac2"} Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.340024 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cmxmp" event={"ID":"c841b377-a95f-4533-bcd3-4f5a53a36301","Type":"ContainerStarted","Data":"11ad50417821933460da73b757f34c90cb7608061f4bf8b73e25ce46de158498"} Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.342598 4903 generic.go:334] "Generic (PLEG): container finished" podID="fbb48c98-8877-4be3-b406-096222fd33e6" containerID="975b2667bbd07ddf57bd0f6d098ed253d88ff1fdcab71160b786c8ff77db9693" exitCode=0 Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.342658 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-19a1-account-create-update-zgvgv" event={"ID":"fbb48c98-8877-4be3-b406-096222fd33e6","Type":"ContainerDied","Data":"975b2667bbd07ddf57bd0f6d098ed253d88ff1fdcab71160b786c8ff77db9693"} Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.342676 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-19a1-account-create-update-zgvgv" event={"ID":"fbb48c98-8877-4be3-b406-096222fd33e6","Type":"ContainerStarted","Data":"a3b6a74b1559ebbc8752b8f63f449ef5685ae28d015832f96f79450b7bbfb663"} Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.343944 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gxgmt" event={"ID":"d4dbcd08-6def-4380-8cc4-93a156624deb","Type":"ContainerStarted","Data":"0d1ea7e821d03a7c32ddf82f43f1bb77a4c18f114b371b772fdfa930d4a338f5"} Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.377744 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gxgmt" podStartSLOduration=2.227973846 podStartE2EDuration="10.377724391s" podCreationTimestamp="2026-01-28 16:04:32 +0000 UTC" firstStartedPulling="2026-01-28 
16:04:33.269021029 +0000 UTC m=+1145.544992540" lastFinishedPulling="2026-01-28 16:04:41.418771564 +0000 UTC m=+1153.694743085" observedRunningTime="2026-01-28 16:04:42.373855226 +0000 UTC m=+1154.649826737" watchObservedRunningTime="2026-01-28 16:04:42.377724391 +0000 UTC m=+1154.653695902" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.428992 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r9tdb"] Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.429891 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.431736 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.433164 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r9tdb"] Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.521814 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30ccc7e-ffc7-4072-b872-f243529d9ab5-operator-scripts\") pod \"root-account-create-update-r9tdb\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.522143 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf4jp\" (UniqueName: \"kubernetes.io/projected/a30ccc7e-ffc7-4072-b872-f243529d9ab5-kube-api-access-rf4jp\") pod \"root-account-create-update-r9tdb\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.624061 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf4jp\" (UniqueName: \"kubernetes.io/projected/a30ccc7e-ffc7-4072-b872-f243529d9ab5-kube-api-access-rf4jp\") pod \"root-account-create-update-r9tdb\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.624427 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30ccc7e-ffc7-4072-b872-f243529d9ab5-operator-scripts\") pod \"root-account-create-update-r9tdb\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.625382 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30ccc7e-ffc7-4072-b872-f243529d9ab5-operator-scripts\") pod \"root-account-create-update-r9tdb\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.658354 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf4jp\" (UniqueName: \"kubernetes.io/projected/a30ccc7e-ffc7-4072-b872-f243529d9ab5-kube-api-access-rf4jp\") pod \"root-account-create-update-r9tdb\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.742985 4903 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:42 crc kubenswrapper[4903]: I0128 16:04:42.970663 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r9tdb"] Jan 28 16:04:42 crc kubenswrapper[4903]: W0128 16:04:42.976514 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda30ccc7e_ffc7_4072_b872_f243529d9ab5.slice/crio-1714cf8fc9a42482dd0c5b03b07bd9604aa5335f1518798947b2bff58f2d4bb6 WatchSource:0}: Error finding container 1714cf8fc9a42482dd0c5b03b07bd9604aa5335f1518798947b2bff58f2d4bb6: Status 404 returned error can't find the container with id 1714cf8fc9a42482dd0c5b03b07bd9604aa5335f1518798947b2bff58f2d4bb6 Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.350738 4903 generic.go:334] "Generic (PLEG): container finished" podID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerID="03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10" exitCode=0 Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.350783 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cee6442c-f9ef-4902-b6ec-2bc01a904849","Type":"ContainerDied","Data":"03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10"} Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.352323 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9tdb" event={"ID":"a30ccc7e-ffc7-4072-b872-f243529d9ab5","Type":"ContainerStarted","Data":"75cf84f6bdd8f7c3ccdacd4f16f7ccf2eb0b296a54dbd45763753d9cfa08eb89"} Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.352359 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9tdb" event={"ID":"a30ccc7e-ffc7-4072-b872-f243529d9ab5","Type":"ContainerStarted","Data":"1714cf8fc9a42482dd0c5b03b07bd9604aa5335f1518798947b2bff58f2d4bb6"} Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.355564 4903 generic.go:334] "Generic (PLEG): container finished" podID="bb51034c-4387-4aba-8eff-6ff960538da9" containerID="35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2" exitCode=0 Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.355756 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb51034c-4387-4aba-8eff-6ff960538da9","Type":"ContainerDied","Data":"35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2"} Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.419502 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-r9tdb" podStartSLOduration=1.419476345 podStartE2EDuration="1.419476345s" podCreationTimestamp="2026-01-28 16:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:43.406405128 +0000 UTC m=+1155.682376639" watchObservedRunningTime="2026-01-28 16:04:43.419476345 +0000 UTC m=+1155.695447856" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.706827 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.749671 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.751376 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6w2k\" (UniqueName: \"kubernetes.io/projected/c841b377-a95f-4533-bcd3-4f5a53a36301-kube-api-access-d6w2k\") pod \"c841b377-a95f-4533-bcd3-4f5a53a36301\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.751648 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c841b377-a95f-4533-bcd3-4f5a53a36301-operator-scripts\") pod \"c841b377-a95f-4533-bcd3-4f5a53a36301\" (UID: \"c841b377-a95f-4533-bcd3-4f5a53a36301\") " Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.752912 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c841b377-a95f-4533-bcd3-4f5a53a36301-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c841b377-a95f-4533-bcd3-4f5a53a36301" (UID: "c841b377-a95f-4533-bcd3-4f5a53a36301"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.760426 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c841b377-a95f-4533-bcd3-4f5a53a36301-kube-api-access-d6w2k" (OuterVolumeSpecName: "kube-api-access-d6w2k") pod "c841b377-a95f-4533-bcd3-4f5a53a36301" (UID: "c841b377-a95f-4533-bcd3-4f5a53a36301"). InnerVolumeSpecName "kube-api-access-d6w2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.811752 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.853492 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249qb\" (UniqueName: \"kubernetes.io/projected/fbb48c98-8877-4be3-b406-096222fd33e6-kube-api-access-249qb\") pod \"fbb48c98-8877-4be3-b406-096222fd33e6\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.853610 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbb48c98-8877-4be3-b406-096222fd33e6-operator-scripts\") pod \"fbb48c98-8877-4be3-b406-096222fd33e6\" (UID: \"fbb48c98-8877-4be3-b406-096222fd33e6\") " Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.854096 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c841b377-a95f-4533-bcd3-4f5a53a36301-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.854119 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6w2k\" (UniqueName: \"kubernetes.io/projected/c841b377-a95f-4533-bcd3-4f5a53a36301-kube-api-access-d6w2k\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.854215 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbb48c98-8877-4be3-b406-096222fd33e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbb48c98-8877-4be3-b406-096222fd33e6" (UID: "fbb48c98-8877-4be3-b406-096222fd33e6"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.860917 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb48c98-8877-4be3-b406-096222fd33e6-kube-api-access-249qb" (OuterVolumeSpecName: "kube-api-access-249qb") pod "fbb48c98-8877-4be3-b406-096222fd33e6" (UID: "fbb48c98-8877-4be3-b406-096222fd33e6"). InnerVolumeSpecName "kube-api-access-249qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.956035 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249qb\" (UniqueName: \"kubernetes.io/projected/fbb48c98-8877-4be3-b406-096222fd33e6-kube-api-access-249qb\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:43 crc kubenswrapper[4903]: I0128 16:04:43.956066 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbb48c98-8877-4be3-b406-096222fd33e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.196690 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.279671 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb874f4c9-w4w49"] Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.364384 4903 generic.go:334] "Generic (PLEG): container finished" podID="a30ccc7e-ffc7-4072-b872-f243529d9ab5" containerID="75cf84f6bdd8f7c3ccdacd4f16f7ccf2eb0b296a54dbd45763753d9cfa08eb89" exitCode=0 Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.364446 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9tdb" event={"ID":"a30ccc7e-ffc7-4072-b872-f243529d9ab5","Type":"ContainerDied","Data":"75cf84f6bdd8f7c3ccdacd4f16f7ccf2eb0b296a54dbd45763753d9cfa08eb89"} Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.365946 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-19a1-account-create-update-zgvgv" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.365950 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-19a1-account-create-update-zgvgv" event={"ID":"fbb48c98-8877-4be3-b406-096222fd33e6","Type":"ContainerDied","Data":"a3b6a74b1559ebbc8752b8f63f449ef5685ae28d015832f96f79450b7bbfb663"} Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.366091 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b6a74b1559ebbc8752b8f63f449ef5685ae28d015832f96f79450b7bbfb663" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.368768 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb51034c-4387-4aba-8eff-6ff960538da9","Type":"ContainerStarted","Data":"3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef"} Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.368985 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.370386 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cmxmp" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.370403 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cmxmp" event={"ID":"c841b377-a95f-4533-bcd3-4f5a53a36301","Type":"ContainerDied","Data":"11ad50417821933460da73b757f34c90cb7608061f4bf8b73e25ce46de158498"} Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.370612 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11ad50417821933460da73b757f34c90cb7608061f4bf8b73e25ce46de158498" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.372591 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cee6442c-f9ef-4902-b6ec-2bc01a904849","Type":"ContainerStarted","Data":"cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee"} Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.372723 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" podUID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerName="dnsmasq-dns" containerID="cri-o://331cbe01572bd4ac44fc63da32e293dd5851dde4de002ec71038e972a3803775" gracePeriod=10 Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.372848 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.422249 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.556404758 podStartE2EDuration="53.422181414s" podCreationTimestamp="2026-01-28 16:03:51 +0000 UTC" firstStartedPulling="2026-01-28 16:03:52.973169155 +0000 UTC m=+1105.249140666" lastFinishedPulling="2026-01-28 16:04:09.838945811 +0000 UTC m=+1122.114917322" observedRunningTime="2026-01-28 16:04:44.415712757 +0000 UTC m=+1156.691684288" watchObservedRunningTime="2026-01-28 16:04:44.422181414 +0000 UTC m=+1156.698152935" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.452669 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.334041998000004 podStartE2EDuration="54.452643914s" podCreationTimestamp="2026-01-28 16:03:50 +0000 UTC" firstStartedPulling="2026-01-28 16:03:52.798786113 +0000 UTC m=+1105.074757624" lastFinishedPulling="2026-01-28 16:04:09.917388029 +0000 UTC m=+1122.193359540" observedRunningTime="2026-01-28 16:04:44.451805861 +0000 UTC m=+1156.727777392" watchObservedRunningTime="2026-01-28 16:04:44.452643914 +0000 UTC m=+1156.728615425" Jan 28 16:04:44 crc kubenswrapper[4903]: I0128 16:04:44.666504 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:04:44 crc kubenswrapper[4903]: E0128 16:04:44.666759 4903 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 16:04:44 crc kubenswrapper[4903]: E0128 16:04:44.666821 4903 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 16:04:44 crc kubenswrapper[4903]: E0128 16:04:44.667022 4903 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift podName:2fe73c5e-1acc-4125-8ff9-e42b69488039 nodeName:}" failed. No retries permitted until 2026-01-28 16:05:00.666999627 +0000 UTC m=+1172.942971138 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift") pod "swift-storage-0" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039") : configmap "swift-ring-files" not found Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.091928 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5lmfj"] Jan 28 16:04:45 crc kubenswrapper[4903]: E0128 16:04:45.093043 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb48c98-8877-4be3-b406-096222fd33e6" containerName="mariadb-account-create-update" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.093282 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb48c98-8877-4be3-b406-096222fd33e6" containerName="mariadb-account-create-update" Jan 28 16:04:45 crc kubenswrapper[4903]: E0128 16:04:45.093357 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c841b377-a95f-4533-bcd3-4f5a53a36301" containerName="mariadb-database-create" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.093368 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c841b377-a95f-4533-bcd3-4f5a53a36301" containerName="mariadb-database-create" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.094112 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb48c98-8877-4be3-b406-096222fd33e6" containerName="mariadb-account-create-update" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.094141 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c841b377-a95f-4533-bcd3-4f5a53a36301" containerName="mariadb-database-create" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.095900 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.147627 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5lmfj"] Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.174279 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c586e0-c175-4b87-9464-b44649a8eb10-operator-scripts\") pod \"keystone-db-create-5lmfj\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.174354 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w79q7\" (UniqueName: \"kubernetes.io/projected/57c586e0-c175-4b87-9464-b44649a8eb10-kube-api-access-w79q7\") pod \"keystone-db-create-5lmfj\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.222949 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fd05-account-create-update-vs6jd"] Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.225399 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.231127 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.239439 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd05-account-create-update-vs6jd"] Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.276095 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w79q7\" (UniqueName: \"kubernetes.io/projected/57c586e0-c175-4b87-9464-b44649a8eb10-kube-api-access-w79q7\") pod \"keystone-db-create-5lmfj\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.276189 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f309ffd-6cba-4804-b3d5-114c4cad07bc-operator-scripts\") pod \"keystone-fd05-account-create-update-vs6jd\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.276236 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrq7\" (UniqueName: \"kubernetes.io/projected/7f309ffd-6cba-4804-b3d5-114c4cad07bc-kube-api-access-xdrq7\") pod \"keystone-fd05-account-create-update-vs6jd\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.276287 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c586e0-c175-4b87-9464-b44649a8eb10-operator-scripts\") pod \"keystone-db-create-5lmfj\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.277265 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c586e0-c175-4b87-9464-b44649a8eb10-operator-scripts\") pod \"keystone-db-create-5lmfj\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.298710 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w79q7\" (UniqueName: \"kubernetes.io/projected/57c586e0-c175-4b87-9464-b44649a8eb10-kube-api-access-w79q7\") pod \"keystone-db-create-5lmfj\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.378304 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f309ffd-6cba-4804-b3d5-114c4cad07bc-operator-scripts\") pod \"keystone-fd05-account-create-update-vs6jd\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.378412 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrq7\" (UniqueName: \"kubernetes.io/projected/7f309ffd-6cba-4804-b3d5-114c4cad07bc-kube-api-access-xdrq7\") pod 
\"keystone-fd05-account-create-update-vs6jd\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.379256 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f309ffd-6cba-4804-b3d5-114c4cad07bc-operator-scripts\") pod \"keystone-fd05-account-create-update-vs6jd\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.380909 4903 generic.go:334] "Generic (PLEG): container finished" podID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerID="331cbe01572bd4ac44fc63da32e293dd5851dde4de002ec71038e972a3803775" exitCode=0 Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.381757 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" event={"ID":"0cc3b30b-780e-4ae6-a86a-41f029101eb8","Type":"ContainerDied","Data":"331cbe01572bd4ac44fc63da32e293dd5851dde4de002ec71038e972a3803775"} Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.414928 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.424954 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-tw4vv"] Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.425920 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.433163 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrq7\" (UniqueName: \"kubernetes.io/projected/7f309ffd-6cba-4804-b3d5-114c4cad07bc-kube-api-access-xdrq7\") pod \"keystone-fd05-account-create-update-vs6jd\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.480306 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjwhz\" (UniqueName: \"kubernetes.io/projected/6c325698-a4a2-4f1b-a865-e37be6610791-kube-api-access-hjwhz\") pod \"placement-db-create-tw4vv\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.483172 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c325698-a4a2-4f1b-a865-e37be6610791-operator-scripts\") pod \"placement-db-create-tw4vv\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.483766 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tw4vv"] Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.548247 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.563861 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-22c6-account-create-update-6xb8c"] Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.565994 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.573217 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.576374 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22c6-account-create-update-6xb8c"] Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.590390 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjwhz\" (UniqueName: \"kubernetes.io/projected/6c325698-a4a2-4f1b-a865-e37be6610791-kube-api-access-hjwhz\") pod \"placement-db-create-tw4vv\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.590517 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15daf8e2-37c9-4468-85f9-8f47719805c3-operator-scripts\") pod \"placement-22c6-account-create-update-6xb8c\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.590802 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgj6l\" (UniqueName: \"kubernetes.io/projected/15daf8e2-37c9-4468-85f9-8f47719805c3-kube-api-access-jgj6l\") pod \"placement-22c6-account-create-update-6xb8c\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.590985 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c325698-a4a2-4f1b-a865-e37be6610791-operator-scripts\") pod \"placement-db-create-tw4vv\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.600230 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c325698-a4a2-4f1b-a865-e37be6610791-operator-scripts\") pod \"placement-db-create-tw4vv\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.611982 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjwhz\" (UniqueName: \"kubernetes.io/projected/6c325698-a4a2-4f1b-a865-e37be6610791-kube-api-access-hjwhz\") pod \"placement-db-create-tw4vv\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.692493 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.693152 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgj6l\" (UniqueName: \"kubernetes.io/projected/15daf8e2-37c9-4468-85f9-8f47719805c3-kube-api-access-jgj6l\") pod \"placement-22c6-account-create-update-6xb8c\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.693287 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15daf8e2-37c9-4468-85f9-8f47719805c3-operator-scripts\") pod \"placement-22c6-account-create-update-6xb8c\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.693967 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15daf8e2-37c9-4468-85f9-8f47719805c3-operator-scripts\") pod \"placement-22c6-account-create-update-6xb8c\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.714316 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgj6l\" (UniqueName: \"kubernetes.io/projected/15daf8e2-37c9-4468-85f9-8f47719805c3-kube-api-access-jgj6l\") pod \"placement-22c6-account-create-update-6xb8c\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.765972 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.854651 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.903212 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-config\") pod \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.903281 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-dns-svc\") pod \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.903351 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-ovsdbserver-nb\") pod \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.903594 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwb6v\" (UniqueName: \"kubernetes.io/projected/0cc3b30b-780e-4ae6-a86a-41f029101eb8-kube-api-access-hwb6v\") pod \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\" (UID: \"0cc3b30b-780e-4ae6-a86a-41f029101eb8\") " Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.910134 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cc3b30b-780e-4ae6-a86a-41f029101eb8-kube-api-access-hwb6v" (OuterVolumeSpecName: "kube-api-access-hwb6v") pod "0cc3b30b-780e-4ae6-a86a-41f029101eb8" (UID: "0cc3b30b-780e-4ae6-a86a-41f029101eb8"). InnerVolumeSpecName "kube-api-access-hwb6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.946836 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0cc3b30b-780e-4ae6-a86a-41f029101eb8" (UID: "0cc3b30b-780e-4ae6-a86a-41f029101eb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.949032 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0cc3b30b-780e-4ae6-a86a-41f029101eb8" (UID: "0cc3b30b-780e-4ae6-a86a-41f029101eb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.958206 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:45 crc kubenswrapper[4903]: I0128 16:04:45.967968 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-config" (OuterVolumeSpecName: "config") pod "0cc3b30b-780e-4ae6-a86a-41f029101eb8" (UID: "0cc3b30b-780e-4ae6-a86a-41f029101eb8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.005666 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf4jp\" (UniqueName: \"kubernetes.io/projected/a30ccc7e-ffc7-4072-b872-f243529d9ab5-kube-api-access-rf4jp\") pod \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.005762 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30ccc7e-ffc7-4072-b872-f243529d9ab5-operator-scripts\") pod \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\" (UID: \"a30ccc7e-ffc7-4072-b872-f243529d9ab5\") " Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.006191 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwb6v\" (UniqueName: \"kubernetes.io/projected/0cc3b30b-780e-4ae6-a86a-41f029101eb8-kube-api-access-hwb6v\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.006212 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.006251 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.006263 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cc3b30b-780e-4ae6-a86a-41f029101eb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.006315 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a30ccc7e-ffc7-4072-b872-f243529d9ab5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a30ccc7e-ffc7-4072-b872-f243529d9ab5" (UID: "a30ccc7e-ffc7-4072-b872-f243529d9ab5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.011175 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a30ccc7e-ffc7-4072-b872-f243529d9ab5-kube-api-access-rf4jp" (OuterVolumeSpecName: "kube-api-access-rf4jp") pod "a30ccc7e-ffc7-4072-b872-f243529d9ab5" (UID: "a30ccc7e-ffc7-4072-b872-f243529d9ab5"). InnerVolumeSpecName "kube-api-access-rf4jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.073201 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tbhp2"] Jan 28 16:04:46 crc kubenswrapper[4903]: E0128 16:04:46.073547 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerName="init" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.073564 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerName="init" Jan 28 16:04:46 crc kubenswrapper[4903]: E0128 16:04:46.073590 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a30ccc7e-ffc7-4072-b872-f243529d9ab5" containerName="mariadb-account-create-update" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.073597 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a30ccc7e-ffc7-4072-b872-f243529d9ab5" containerName="mariadb-account-create-update" Jan 28 16:04:46 crc kubenswrapper[4903]: E0128 16:04:46.073612 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerName="dnsmasq-dns" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.073618 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerName="dnsmasq-dns" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.073761 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" containerName="dnsmasq-dns" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.073774 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a30ccc7e-ffc7-4072-b872-f243529d9ab5" containerName="mariadb-account-create-update" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.074269 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.087851 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-g8v94" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.088060 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.095766 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tbhp2"] Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.106901 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62pwn\" (UniqueName: \"kubernetes.io/projected/d42c5032-0edb-4f98-b937-d4bc09ad513a-kube-api-access-62pwn\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.107004 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-config-data\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.107101 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-combined-ca-bundle\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.107129 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-db-sync-config-data\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.107181 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a30ccc7e-ffc7-4072-b872-f243529d9ab5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.107198 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf4jp\" (UniqueName: \"kubernetes.io/projected/a30ccc7e-ffc7-4072-b872-f243529d9ab5-kube-api-access-rf4jp\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.110141 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5lmfj"] Jan 28 16:04:46 crc kubenswrapper[4903]: W0128 16:04:46.125022 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c586e0_c175_4b87_9464_b44649a8eb10.slice/crio-cba8564a2a77d36322012b19a990ea0cec319c0763dcfdc25767ae08efd1967c WatchSource:0}: Error finding container cba8564a2a77d36322012b19a990ea0cec319c0763dcfdc25767ae08efd1967c: Status 404 returned error can't find the container with id cba8564a2a77d36322012b19a990ea0cec319c0763dcfdc25767ae08efd1967c Jan 28 16:04:46 crc kubenswrapper[4903]: W0128 16:04:46.201774 4903 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f309ffd_6cba_4804_b3d5_114c4cad07bc.slice/crio-eaf7e22fd0347cad4cb90f1f0c402a36ccf6334a8ceb2cea38c2409c1b713cbc WatchSource:0}: Error finding container eaf7e22fd0347cad4cb90f1f0c402a36ccf6334a8ceb2cea38c2409c1b713cbc: Status 404 returned error can't find the container with id eaf7e22fd0347cad4cb90f1f0c402a36ccf6334a8ceb2cea38c2409c1b713cbc Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.206552 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd05-account-create-update-vs6jd"] Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.208865 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-combined-ca-bundle\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.208907 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-db-sync-config-data\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.208962 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pwn\" (UniqueName: \"kubernetes.io/projected/d42c5032-0edb-4f98-b937-d4bc09ad513a-kube-api-access-62pwn\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.208992 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-config-data\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.214087 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-combined-ca-bundle\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.218219 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-db-sync-config-data\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.218620 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-config-data\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.237394 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pwn\" (UniqueName: \"kubernetes.io/projected/d42c5032-0edb-4f98-b937-d4bc09ad513a-kube-api-access-62pwn\") pod \"glance-db-sync-tbhp2\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " 
pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.240206 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22c6-account-create-update-6xb8c"] Jan 28 16:04:46 crc kubenswrapper[4903]: W0128 16:04:46.245491 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15daf8e2_37c9_4468_85f9_8f47719805c3.slice/crio-0593a3edbb52349d35a9a5d1cfc36dbceeb7e05cb6c49833e112a82ea8e4c07f WatchSource:0}: Error finding container 0593a3edbb52349d35a9a5d1cfc36dbceeb7e05cb6c49833e112a82ea8e4c07f: Status 404 returned error can't find the container with id 0593a3edbb52349d35a9a5d1cfc36dbceeb7e05cb6c49833e112a82ea8e4c07f Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.306033 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tw4vv"] Jan 28 16:04:46 crc kubenswrapper[4903]: W0128 16:04:46.311627 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c325698_a4a2_4f1b_a865_e37be6610791.slice/crio-b92a2c9259123ddb3022b5aa101a061c65491525b36a94cdf258509ba5ffae23 WatchSource:0}: Error finding container b92a2c9259123ddb3022b5aa101a061c65491525b36a94cdf258509ba5ffae23: Status 404 returned error can't find the container with id b92a2c9259123ddb3022b5aa101a061c65491525b36a94cdf258509ba5ffae23 Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.394300 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tw4vv" event={"ID":"6c325698-a4a2-4f1b-a865-e37be6610791","Type":"ContainerStarted","Data":"b92a2c9259123ddb3022b5aa101a061c65491525b36a94cdf258509ba5ffae23"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.396424 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lmfj" event={"ID":"57c586e0-c175-4b87-9464-b44649a8eb10","Type":"ContainerStarted","Data":"c9737876cfc45ddfa760ab048b1fc0e74d864b2ca9ea30b9db395e230f2a4200"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.396459 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lmfj" event={"ID":"57c586e0-c175-4b87-9464-b44649a8eb10","Type":"ContainerStarted","Data":"cba8564a2a77d36322012b19a990ea0cec319c0763dcfdc25767ae08efd1967c"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.401831 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" event={"ID":"0cc3b30b-780e-4ae6-a86a-41f029101eb8","Type":"ContainerDied","Data":"9b309d23a9a50d20f647414afcb520cf1fb43b0cc82b51db1cb86fd5fb0d94b5"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.401881 4903 scope.go:117] "RemoveContainer" containerID="331cbe01572bd4ac44fc63da32e293dd5851dde4de002ec71038e972a3803775" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.401983 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb874f4c9-w4w49" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.406179 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9tdb" event={"ID":"a30ccc7e-ffc7-4072-b872-f243529d9ab5","Type":"ContainerDied","Data":"1714cf8fc9a42482dd0c5b03b07bd9604aa5335f1518798947b2bff58f2d4bb6"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.406226 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1714cf8fc9a42482dd0c5b03b07bd9604aa5335f1518798947b2bff58f2d4bb6" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.406292 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r9tdb" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.414429 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tbhp2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.428460 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-5lmfj" podStartSLOduration=1.428441946 podStartE2EDuration="1.428441946s" podCreationTimestamp="2026-01-28 16:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:46.4150199 +0000 UTC m=+1158.690991411" watchObservedRunningTime="2026-01-28 16:04:46.428441946 +0000 UTC m=+1158.704413457" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.429366 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22c6-account-create-update-6xb8c" event={"ID":"15daf8e2-37c9-4468-85f9-8f47719805c3","Type":"ContainerStarted","Data":"0593a3edbb52349d35a9a5d1cfc36dbceeb7e05cb6c49833e112a82ea8e4c07f"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.429411 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd05-account-create-update-vs6jd" event={"ID":"7f309ffd-6cba-4804-b3d5-114c4cad07bc","Type":"ContainerStarted","Data":"72e3d39e506db97742eaf666cdaf176e8b5d1a71197a7c4eac4d6f32e7609458"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.429425 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd05-account-create-update-vs6jd" event={"ID":"7f309ffd-6cba-4804-b3d5-114c4cad07bc","Type":"ContainerStarted","Data":"eaf7e22fd0347cad4cb90f1f0c402a36ccf6334a8ceb2cea38c2409c1b713cbc"} Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.451300 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-fd05-account-create-update-vs6jd" podStartSLOduration=1.451279798 podStartE2EDuration="1.451279798s" podCreationTimestamp="2026-01-28 16:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:04:46.443587498 +0000 UTC m=+1158.719559019" watchObservedRunningTime="2026-01-28 16:04:46.451279798 +0000 UTC m=+1158.727251309" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.453420 4903 scope.go:117] "RemoveContainer" containerID="d945d7e5cbfd3ac32d39d07de6bfb3eb58d85c2b5b800ecfec188ed6dfba9be2" Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.568392 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb874f4c9-w4w49"] Jan 28 16:04:46 crc kubenswrapper[4903]: I0128 16:04:46.599298 4903 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-cb874f4c9-w4w49"] Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.027338 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tbhp2"] Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.433248 4903 generic.go:334] "Generic (PLEG): container finished" podID="6c325698-a4a2-4f1b-a865-e37be6610791" containerID="93d92bf9f774ba6a91d35fc3a13bb44ccf661c03287e032bf834bdf903270404" exitCode=0 Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.433287 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tw4vv" event={"ID":"6c325698-a4a2-4f1b-a865-e37be6610791","Type":"ContainerDied","Data":"93d92bf9f774ba6a91d35fc3a13bb44ccf661c03287e032bf834bdf903270404"} Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.438381 4903 generic.go:334] "Generic (PLEG): container finished" podID="57c586e0-c175-4b87-9464-b44649a8eb10" containerID="c9737876cfc45ddfa760ab048b1fc0e74d864b2ca9ea30b9db395e230f2a4200" exitCode=0 Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.438455 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lmfj" event={"ID":"57c586e0-c175-4b87-9464-b44649a8eb10","Type":"ContainerDied","Data":"c9737876cfc45ddfa760ab048b1fc0e74d864b2ca9ea30b9db395e230f2a4200"} Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.457903 4903 generic.go:334] "Generic (PLEG): container finished" podID="15daf8e2-37c9-4468-85f9-8f47719805c3" containerID="5d513371bbd71efb4cfca671c88930c921d810f556a12fe47c138227552f0fd8" exitCode=0 Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.457969 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22c6-account-create-update-6xb8c" event={"ID":"15daf8e2-37c9-4468-85f9-8f47719805c3","Type":"ContainerDied","Data":"5d513371bbd71efb4cfca671c88930c921d810f556a12fe47c138227552f0fd8"} Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.460318 4903 generic.go:334] "Generic (PLEG): container finished" podID="7f309ffd-6cba-4804-b3d5-114c4cad07bc" containerID="72e3d39e506db97742eaf666cdaf176e8b5d1a71197a7c4eac4d6f32e7609458" exitCode=0 Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.460382 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd05-account-create-update-vs6jd" event={"ID":"7f309ffd-6cba-4804-b3d5-114c4cad07bc","Type":"ContainerDied","Data":"72e3d39e506db97742eaf666cdaf176e8b5d1a71197a7c4eac4d6f32e7609458"} Jan 28 16:04:47 crc kubenswrapper[4903]: I0128 16:04:47.461408 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tbhp2" event={"ID":"d42c5032-0edb-4f98-b937-d4bc09ad513a","Type":"ContainerStarted","Data":"4d4806a70f3693ab6568ca09529e46f39e94d758119a47cc6acefdc05c955e72"} Jan 28 16:04:48 crc kubenswrapper[4903]: I0128 16:04:48.426649 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cc3b30b-780e-4ae6-a86a-41f029101eb8" path="/var/lib/kubelet/pods/0cc3b30b-780e-4ae6-a86a-41f029101eb8/volumes" Jan 28 16:04:48 crc kubenswrapper[4903]: I0128 16:04:48.834645 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r9tdb"] Jan 28 16:04:48 crc kubenswrapper[4903]: I0128 16:04:48.847970 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r9tdb"] Jan 28 16:04:48 crc kubenswrapper[4903]: I0128 16:04:48.862046 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.002687 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.008216 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.013383 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.068173 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgj6l\" (UniqueName: \"kubernetes.io/projected/15daf8e2-37c9-4468-85f9-8f47719805c3-kube-api-access-jgj6l\") pod \"15daf8e2-37c9-4468-85f9-8f47719805c3\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.068228 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15daf8e2-37c9-4468-85f9-8f47719805c3-operator-scripts\") pod \"15daf8e2-37c9-4468-85f9-8f47719805c3\" (UID: \"15daf8e2-37c9-4468-85f9-8f47719805c3\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.069197 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15daf8e2-37c9-4468-85f9-8f47719805c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15daf8e2-37c9-4468-85f9-8f47719805c3" (UID: "15daf8e2-37c9-4468-85f9-8f47719805c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.083721 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15daf8e2-37c9-4468-85f9-8f47719805c3-kube-api-access-jgj6l" (OuterVolumeSpecName: "kube-api-access-jgj6l") pod "15daf8e2-37c9-4468-85f9-8f47719805c3" (UID: "15daf8e2-37c9-4468-85f9-8f47719805c3"). InnerVolumeSpecName "kube-api-access-jgj6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170032 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c325698-a4a2-4f1b-a865-e37be6610791-operator-scripts\") pod \"6c325698-a4a2-4f1b-a865-e37be6610791\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170138 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c586e0-c175-4b87-9464-b44649a8eb10-operator-scripts\") pod \"57c586e0-c175-4b87-9464-b44649a8eb10\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170195 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w79q7\" (UniqueName: \"kubernetes.io/projected/57c586e0-c175-4b87-9464-b44649a8eb10-kube-api-access-w79q7\") pod \"57c586e0-c175-4b87-9464-b44649a8eb10\" (UID: \"57c586e0-c175-4b87-9464-b44649a8eb10\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170223 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjwhz\" (UniqueName: \"kubernetes.io/projected/6c325698-a4a2-4f1b-a865-e37be6610791-kube-api-access-hjwhz\") pod \"6c325698-a4a2-4f1b-a865-e37be6610791\" (UID: \"6c325698-a4a2-4f1b-a865-e37be6610791\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170354 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f309ffd-6cba-4804-b3d5-114c4cad07bc-operator-scripts\") pod \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170383 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrq7\" (UniqueName: \"kubernetes.io/projected/7f309ffd-6cba-4804-b3d5-114c4cad07bc-kube-api-access-xdrq7\") pod \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\" (UID: \"7f309ffd-6cba-4804-b3d5-114c4cad07bc\") " Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170781 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgj6l\" (UniqueName: \"kubernetes.io/projected/15daf8e2-37c9-4468-85f9-8f47719805c3-kube-api-access-jgj6l\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.170806 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15daf8e2-37c9-4468-85f9-8f47719805c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.171898 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f309ffd-6cba-4804-b3d5-114c4cad07bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f309ffd-6cba-4804-b3d5-114c4cad07bc" (UID: "7f309ffd-6cba-4804-b3d5-114c4cad07bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.171913 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c586e0-c175-4b87-9464-b44649a8eb10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57c586e0-c175-4b87-9464-b44649a8eb10" (UID: "57c586e0-c175-4b87-9464-b44649a8eb10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.172225 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c325698-a4a2-4f1b-a865-e37be6610791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c325698-a4a2-4f1b-a865-e37be6610791" (UID: "6c325698-a4a2-4f1b-a865-e37be6610791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.174850 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c325698-a4a2-4f1b-a865-e37be6610791-kube-api-access-hjwhz" (OuterVolumeSpecName: "kube-api-access-hjwhz") pod "6c325698-a4a2-4f1b-a865-e37be6610791" (UID: "6c325698-a4a2-4f1b-a865-e37be6610791"). InnerVolumeSpecName "kube-api-access-hjwhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.174883 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f309ffd-6cba-4804-b3d5-114c4cad07bc-kube-api-access-xdrq7" (OuterVolumeSpecName: "kube-api-access-xdrq7") pod "7f309ffd-6cba-4804-b3d5-114c4cad07bc" (UID: "7f309ffd-6cba-4804-b3d5-114c4cad07bc"). InnerVolumeSpecName "kube-api-access-xdrq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.175426 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c586e0-c175-4b87-9464-b44649a8eb10-kube-api-access-w79q7" (OuterVolumeSpecName: "kube-api-access-w79q7") pod "57c586e0-c175-4b87-9464-b44649a8eb10" (UID: "57c586e0-c175-4b87-9464-b44649a8eb10"). InnerVolumeSpecName "kube-api-access-w79q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.273122 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c325698-a4a2-4f1b-a865-e37be6610791-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.273168 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c586e0-c175-4b87-9464-b44649a8eb10-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.273181 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w79q7\" (UniqueName: \"kubernetes.io/projected/57c586e0-c175-4b87-9464-b44649a8eb10-kube-api-access-w79q7\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.273196 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjwhz\" (UniqueName: \"kubernetes.io/projected/6c325698-a4a2-4f1b-a865-e37be6610791-kube-api-access-hjwhz\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.273209 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f309ffd-6cba-4804-b3d5-114c4cad07bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.273221 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrq7\" (UniqueName: \"kubernetes.io/projected/7f309ffd-6cba-4804-b3d5-114c4cad07bc-kube-api-access-xdrq7\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.490476 4903 generic.go:334] "Generic (PLEG): container finished" podID="d4dbcd08-6def-4380-8cc4-93a156624deb" containerID="0d1ea7e821d03a7c32ddf82f43f1bb77a4c18f114b371b772fdfa930d4a338f5" exitCode=0 Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.490576 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gxgmt" event={"ID":"d4dbcd08-6def-4380-8cc4-93a156624deb","Type":"ContainerDied","Data":"0d1ea7e821d03a7c32ddf82f43f1bb77a4c18f114b371b772fdfa930d4a338f5"} Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.493614 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tw4vv" event={"ID":"6c325698-a4a2-4f1b-a865-e37be6610791","Type":"ContainerDied","Data":"b92a2c9259123ddb3022b5aa101a061c65491525b36a94cdf258509ba5ffae23"} Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.493691 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b92a2c9259123ddb3022b5aa101a061c65491525b36a94cdf258509ba5ffae23" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.494016 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tw4vv" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.496103 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lmfj" event={"ID":"57c586e0-c175-4b87-9464-b44649a8eb10","Type":"ContainerDied","Data":"cba8564a2a77d36322012b19a990ea0cec319c0763dcfdc25767ae08efd1967c"} Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.496156 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cba8564a2a77d36322012b19a990ea0cec319c0763dcfdc25767ae08efd1967c" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.496361 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lmfj" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.498395 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22c6-account-create-update-6xb8c" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.498403 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22c6-account-create-update-6xb8c" event={"ID":"15daf8e2-37c9-4468-85f9-8f47719805c3","Type":"ContainerDied","Data":"0593a3edbb52349d35a9a5d1cfc36dbceeb7e05cb6c49833e112a82ea8e4c07f"} Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.498447 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0593a3edbb52349d35a9a5d1cfc36dbceeb7e05cb6c49833e112a82ea8e4c07f" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.500887 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fd05-account-create-update-vs6jd" event={"ID":"7f309ffd-6cba-4804-b3d5-114c4cad07bc","Type":"ContainerDied","Data":"eaf7e22fd0347cad4cb90f1f0c402a36ccf6334a8ceb2cea38c2409c1b713cbc"} Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.500915 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaf7e22fd0347cad4cb90f1f0c402a36ccf6334a8ceb2cea38c2409c1b713cbc" Jan 28 16:04:49 crc kubenswrapper[4903]: I0128 16:04:49.501001 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fd05-account-create-update-vs6jd" Jan 28 16:04:50 crc kubenswrapper[4903]: I0128 16:04:50.005391 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 16:04:50 crc kubenswrapper[4903]: I0128 16:04:50.427109 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a30ccc7e-ffc7-4072-b872-f243529d9ab5" path="/var/lib/kubelet/pods/a30ccc7e-ffc7-4072-b872-f243529d9ab5/volumes" Jan 28 16:04:50 crc kubenswrapper[4903]: I0128 16:04:50.814371 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.007663 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-combined-ca-bundle\") pod \"d4dbcd08-6def-4380-8cc4-93a156624deb\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.007819 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-swiftconf\") pod \"d4dbcd08-6def-4380-8cc4-93a156624deb\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.007867 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-dispersionconf\") pod \"d4dbcd08-6def-4380-8cc4-93a156624deb\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.007899 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-ring-data-devices\") pod \"d4dbcd08-6def-4380-8cc4-93a156624deb\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.007945 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d4dbcd08-6def-4380-8cc4-93a156624deb-etc-swift\") pod \"d4dbcd08-6def-4380-8cc4-93a156624deb\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.007973 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-scripts\") pod \"d4dbcd08-6def-4380-8cc4-93a156624deb\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.008015 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx2cl\" (UniqueName: \"kubernetes.io/projected/d4dbcd08-6def-4380-8cc4-93a156624deb-kube-api-access-mx2cl\") pod \"d4dbcd08-6def-4380-8cc4-93a156624deb\" (UID: \"d4dbcd08-6def-4380-8cc4-93a156624deb\") " Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.008396 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d4dbcd08-6def-4380-8cc4-93a156624deb" (UID: "d4dbcd08-6def-4380-8cc4-93a156624deb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.009210 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4dbcd08-6def-4380-8cc4-93a156624deb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d4dbcd08-6def-4380-8cc4-93a156624deb" (UID: "d4dbcd08-6def-4380-8cc4-93a156624deb"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.023689 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4dbcd08-6def-4380-8cc4-93a156624deb-kube-api-access-mx2cl" (OuterVolumeSpecName: "kube-api-access-mx2cl") pod "d4dbcd08-6def-4380-8cc4-93a156624deb" (UID: "d4dbcd08-6def-4380-8cc4-93a156624deb"). InnerVolumeSpecName "kube-api-access-mx2cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.027004 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d4dbcd08-6def-4380-8cc4-93a156624deb" (UID: "d4dbcd08-6def-4380-8cc4-93a156624deb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.028406 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d4dbcd08-6def-4380-8cc4-93a156624deb" (UID: "d4dbcd08-6def-4380-8cc4-93a156624deb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.031048 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-scripts" (OuterVolumeSpecName: "scripts") pod "d4dbcd08-6def-4380-8cc4-93a156624deb" (UID: "d4dbcd08-6def-4380-8cc4-93a156624deb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.031362 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4dbcd08-6def-4380-8cc4-93a156624deb" (UID: "d4dbcd08-6def-4380-8cc4-93a156624deb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.109851 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.109898 4903 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.109910 4903 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d4dbcd08-6def-4380-8cc4-93a156624deb-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.109919 4903 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.109926 4903 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d4dbcd08-6def-4380-8cc4-93a156624deb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.109934 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4dbcd08-6def-4380-8cc4-93a156624deb-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.109943 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx2cl\" (UniqueName: \"kubernetes.io/projected/d4dbcd08-6def-4380-8cc4-93a156624deb-kube-api-access-mx2cl\") on node \"crc\" DevicePath \"\"" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.520803 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gxgmt" event={"ID":"d4dbcd08-6def-4380-8cc4-93a156624deb","Type":"ContainerDied","Data":"168dc3ae9f50083ed890b4e0d4172c2474e569f5d06d3c0d1f6945047f3ae97a"} Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.520873 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168dc3ae9f50083ed890b4e0d4172c2474e569f5d06d3c0d1f6945047f3ae97a" Jan 28 16:04:51 crc kubenswrapper[4903]: I0128 16:04:51.520938 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gxgmt" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.827585 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4zx8t"] Jan 28 16:04:53 crc kubenswrapper[4903]: E0128 16:04:53.828412 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c325698-a4a2-4f1b-a865-e37be6610791" containerName="mariadb-database-create" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828435 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c325698-a4a2-4f1b-a865-e37be6610791" containerName="mariadb-database-create" Jan 28 16:04:53 crc kubenswrapper[4903]: E0128 16:04:53.828450 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15daf8e2-37c9-4468-85f9-8f47719805c3" containerName="mariadb-account-create-update" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828458 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="15daf8e2-37c9-4468-85f9-8f47719805c3" containerName="mariadb-account-create-update" Jan 28 16:04:53 crc kubenswrapper[4903]: E0128 16:04:53.828475 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4dbcd08-6def-4380-8cc4-93a156624deb" containerName="swift-ring-rebalance" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828483 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4dbcd08-6def-4380-8cc4-93a156624deb" containerName="swift-ring-rebalance" Jan 28 16:04:53 crc kubenswrapper[4903]: E0128 16:04:53.828511 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f309ffd-6cba-4804-b3d5-114c4cad07bc" containerName="mariadb-account-create-update" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828518 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f309ffd-6cba-4804-b3d5-114c4cad07bc" containerName="mariadb-account-create-update" Jan 28 16:04:53 crc kubenswrapper[4903]: E0128 16:04:53.828549 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c586e0-c175-4b87-9464-b44649a8eb10" containerName="mariadb-database-create" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828559 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c586e0-c175-4b87-9464-b44649a8eb10" containerName="mariadb-database-create" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828824 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c325698-a4a2-4f1b-a865-e37be6610791" containerName="mariadb-database-create" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828840 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c586e0-c175-4b87-9464-b44649a8eb10" containerName="mariadb-database-create" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828850 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f309ffd-6cba-4804-b3d5-114c4cad07bc" containerName="mariadb-account-create-update" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828865 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4dbcd08-6def-4380-8cc4-93a156624deb" containerName="swift-ring-rebalance" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.828879 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="15daf8e2-37c9-4468-85f9-8f47719805c3" containerName="mariadb-account-create-update" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.829567 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.832807 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.837229 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4zx8t"] Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.868284 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-operator-scripts\") pod \"root-account-create-update-4zx8t\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.868385 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2sbp\" (UniqueName: \"kubernetes.io/projected/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-kube-api-access-f2sbp\") pod \"root-account-create-update-4zx8t\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.969902 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-operator-scripts\") pod \"root-account-create-update-4zx8t\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.969969 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2sbp\" (UniqueName: \"kubernetes.io/projected/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-kube-api-access-f2sbp\") pod \"root-account-create-update-4zx8t\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:53 crc kubenswrapper[4903]: I0128 16:04:53.970755 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-operator-scripts\") pod \"root-account-create-update-4zx8t\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:54 crc kubenswrapper[4903]: I0128 16:04:54.030982 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2sbp\" (UniqueName: \"kubernetes.io/projected/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-kube-api-access-f2sbp\") pod \"root-account-create-update-4zx8t\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:54 crc kubenswrapper[4903]: I0128 16:04:54.163493 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4zx8t" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.365083 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g8tcr" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerName="ovn-controller" probeResult="failure" output=< Jan 28 16:04:56 crc kubenswrapper[4903]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 16:04:56 crc kubenswrapper[4903]: > Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.410360 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.410977 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.650891 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g8tcr-config-xn2x6"] Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.658196 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.662089 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.693363 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g8tcr-config-xn2x6"] Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.821810 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run-ovn\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.821865 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbnmz\" (UniqueName: \"kubernetes.io/projected/91811d83-2d26-496d-84c1-ce415aa488a6-kube-api-access-jbnmz\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.821917 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.821948 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-scripts\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.822265 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-log-ovn\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: 
\"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.822342 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-additional-scripts\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924042 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run-ovn\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924111 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbnmz\" (UniqueName: \"kubernetes.io/projected/91811d83-2d26-496d-84c1-ce415aa488a6-kube-api-access-jbnmz\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924158 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924196 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-scripts\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924254 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-log-ovn\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924272 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-additional-scripts\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924480 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run-ovn\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924685 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-log-ovn\") pod 
\"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.924760 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.925113 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-additional-scripts\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.928308 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-scripts\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:56 crc kubenswrapper[4903]: I0128 16:04:56.945117 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbnmz\" (UniqueName: \"kubernetes.io/projected/91811d83-2d26-496d-84c1-ce415aa488a6-kube-api-access-jbnmz\") pod \"ovn-controller-g8tcr-config-xn2x6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:04:57 crc kubenswrapper[4903]: I0128 16:04:57.003934 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:05:00 crc kubenswrapper[4903]: I0128 16:05:00.022383 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g8tcr-config-xn2x6"] Jan 28 16:05:00 crc kubenswrapper[4903]: W0128 16:05:00.024734 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91811d83_2d26_496d_84c1_ce415aa488a6.slice/crio-97686e9231f23f42b747a62b393424f133677db56a049fd1f2826e6cf2af0d34 WatchSource:0}: Error finding container 97686e9231f23f42b747a62b393424f133677db56a049fd1f2826e6cf2af0d34: Status 404 returned error can't find the container with id 97686e9231f23f42b747a62b393424f133677db56a049fd1f2826e6cf2af0d34 Jan 28 16:05:00 crc kubenswrapper[4903]: I0128 16:05:00.047470 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4zx8t"] Jan 28 16:05:00 crc kubenswrapper[4903]: W0128 16:05:00.053281 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad0f5c51_bd2a_4640_b0e3_a826d45a28d6.slice/crio-3c8c0aaae7cc0bf5492cf122ad38267fc8380cdbe485eab7034134e3dedfb67c WatchSource:0}: Error finding container 3c8c0aaae7cc0bf5492cf122ad38267fc8380cdbe485eab7034134e3dedfb67c: Status 404 returned error can't find the container with id 3c8c0aaae7cc0bf5492cf122ad38267fc8380cdbe485eab7034134e3dedfb67c Jan 28 16:05:00 crc kubenswrapper[4903]: I0128 16:05:00.619570 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g8tcr-config-xn2x6" event={"ID":"91811d83-2d26-496d-84c1-ce415aa488a6","Type":"ContainerStarted","Data":"97686e9231f23f42b747a62b393424f133677db56a049fd1f2826e6cf2af0d34"} Jan 28 16:05:00 crc kubenswrapper[4903]: I0128 16:05:00.621109 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4zx8t" event={"ID":"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6","Type":"ContainerStarted","Data":"3c8c0aaae7cc0bf5492cf122ad38267fc8380cdbe485eab7034134e3dedfb67c"} Jan 28 16:05:00 crc kubenswrapper[4903]: I0128 16:05:00.685232 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:05:00 crc kubenswrapper[4903]: I0128 16:05:00.692026 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"swift-storage-0\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " pod="openstack/swift-storage-0" Jan 28 16:05:00 crc kubenswrapper[4903]: I0128 16:05:00.740974 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 16:05:01 crc kubenswrapper[4903]: I0128 16:05:01.249757 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 16:05:01 crc kubenswrapper[4903]: W0128 16:05:01.255313 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe73c5e_1acc_4125_8ff9_e42b69488039.slice/crio-3765f199c704975ce06bc8b7409ecc4c6569b7c5b0066810a89fd957f5e42637 WatchSource:0}: Error finding container 3765f199c704975ce06bc8b7409ecc4c6569b7c5b0066810a89fd957f5e42637: Status 404 returned error can't find the container with id 3765f199c704975ce06bc8b7409ecc4c6569b7c5b0066810a89fd957f5e42637 Jan 28 16:05:01 crc kubenswrapper[4903]: I0128 16:05:01.358005 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-g8tcr" Jan 28 16:05:01 crc kubenswrapper[4903]: I0128 16:05:01.635299 4903 generic.go:334] "Generic (PLEG): container finished" podID="91811d83-2d26-496d-84c1-ce415aa488a6" containerID="55f8bf6d429541e01c475597f5351c29e93f7bfc9a5aa0340d04790b146db9ba" exitCode=0 Jan 28 16:05:01 crc kubenswrapper[4903]: I0128 16:05:01.635370 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g8tcr-config-xn2x6" event={"ID":"91811d83-2d26-496d-84c1-ce415aa488a6","Type":"ContainerDied","Data":"55f8bf6d429541e01c475597f5351c29e93f7bfc9a5aa0340d04790b146db9ba"} Jan 28 16:05:01 crc kubenswrapper[4903]: I0128 16:05:01.639157 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"3765f199c704975ce06bc8b7409ecc4c6569b7c5b0066810a89fd957f5e42637"} Jan 28 16:05:01 crc kubenswrapper[4903]: I0128 16:05:01.641272 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4zx8t" event={"ID":"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6","Type":"ContainerStarted","Data":"fc9ecb8ba71fe2f33aa423e27d386ee156e01204e113825ea9be4174fda6a516"} Jan 28 16:05:02 crc kubenswrapper[4903]: I0128 16:05:02.296739 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:05:02 crc kubenswrapper[4903]: I0128 16:05:02.719720 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.242309 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.330711 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run-ovn\") pod \"91811d83-2d26-496d-84c1-ce415aa488a6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.330818 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbnmz\" (UniqueName: \"kubernetes.io/projected/91811d83-2d26-496d-84c1-ce415aa488a6-kube-api-access-jbnmz\") pod \"91811d83-2d26-496d-84c1-ce415aa488a6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.330846 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-scripts\") pod \"91811d83-2d26-496d-84c1-ce415aa488a6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.330852 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "91811d83-2d26-496d-84c1-ce415aa488a6" (UID: "91811d83-2d26-496d-84c1-ce415aa488a6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.330879 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run\") pod \"91811d83-2d26-496d-84c1-ce415aa488a6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.330894 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-log-ovn\") pod \"91811d83-2d26-496d-84c1-ce415aa488a6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.330910 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run" (OuterVolumeSpecName: "var-run") pod "91811d83-2d26-496d-84c1-ce415aa488a6" (UID: "91811d83-2d26-496d-84c1-ce415aa488a6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.331015 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "91811d83-2d26-496d-84c1-ce415aa488a6" (UID: "91811d83-2d26-496d-84c1-ce415aa488a6"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.331041 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-additional-scripts\") pod \"91811d83-2d26-496d-84c1-ce415aa488a6\" (UID: \"91811d83-2d26-496d-84c1-ce415aa488a6\") " Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.331414 4903 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.331432 4903 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.331443 4903 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91811d83-2d26-496d-84c1-ce415aa488a6-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.331804 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "91811d83-2d26-496d-84c1-ce415aa488a6" (UID: "91811d83-2d26-496d-84c1-ce415aa488a6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.331928 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-scripts" (OuterVolumeSpecName: "scripts") pod "91811d83-2d26-496d-84c1-ce415aa488a6" (UID: "91811d83-2d26-496d-84c1-ce415aa488a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.334582 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91811d83-2d26-496d-84c1-ce415aa488a6-kube-api-access-jbnmz" (OuterVolumeSpecName: "kube-api-access-jbnmz") pod "91811d83-2d26-496d-84c1-ce415aa488a6" (UID: "91811d83-2d26-496d-84c1-ce415aa488a6"). InnerVolumeSpecName "kube-api-access-jbnmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.432943 4903 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.432982 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbnmz\" (UniqueName: \"kubernetes.io/projected/91811d83-2d26-496d-84c1-ce415aa488a6-kube-api-access-jbnmz\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.432997 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91811d83-2d26-496d-84c1-ce415aa488a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.657718 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tbhp2" event={"ID":"d42c5032-0edb-4f98-b937-d4bc09ad513a","Type":"ContainerStarted","Data":"7b254ac934a2239d6d4a13a900aec90e10f52506dada4040c9739c1b25c9d748"} Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.660064 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g8tcr-config-xn2x6" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.660081 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g8tcr-config-xn2x6" event={"ID":"91811d83-2d26-496d-84c1-ce415aa488a6","Type":"ContainerDied","Data":"97686e9231f23f42b747a62b393424f133677db56a049fd1f2826e6cf2af0d34"} Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.660117 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97686e9231f23f42b747a62b393424f133677db56a049fd1f2826e6cf2af0d34" Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.670172 4903 generic.go:334] "Generic (PLEG): container finished" podID="ad0f5c51-bd2a-4640-b0e3-a826d45a28d6" containerID="fc9ecb8ba71fe2f33aa423e27d386ee156e01204e113825ea9be4174fda6a516" exitCode=0 Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.670214 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4zx8t" event={"ID":"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6","Type":"ContainerDied","Data":"fc9ecb8ba71fe2f33aa423e27d386ee156e01204e113825ea9be4174fda6a516"} Jan 28 16:05:03 crc kubenswrapper[4903]: I0128 16:05:03.692792 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tbhp2" podStartSLOduration=1.675597138 podStartE2EDuration="17.692773663s" podCreationTimestamp="2026-01-28 16:04:46 +0000 UTC" firstStartedPulling="2026-01-28 16:04:47.032172601 +0000 UTC m=+1159.308144112" lastFinishedPulling="2026-01-28 16:05:03.049349126 +0000 UTC m=+1175.325320637" observedRunningTime="2026-01-28 16:05:03.682444992 +0000 UTC m=+1175.958416503" watchObservedRunningTime="2026-01-28 16:05:03.692773663 +0000 UTC m=+1175.968745174" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.487106 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-sp6mn"] Jan 28 16:05:04 crc kubenswrapper[4903]: E0128 16:05:04.487716 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91811d83-2d26-496d-84c1-ce415aa488a6" containerName="ovn-config" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.487730 4903 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="91811d83-2d26-496d-84c1-ce415aa488a6" containerName="ovn-config" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.487900 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="91811d83-2d26-496d-84c1-ce415aa488a6" containerName="ovn-config" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.488405 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.507895 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g8tcr-config-xn2x6"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.516919 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g8tcr-config-xn2x6"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.526720 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-ws2qb"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.527918 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.532855 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sp6mn"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.546824 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ws2qb"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.551403 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jjg6\" (UniqueName: \"kubernetes.io/projected/8b91a6df-a714-4199-b4dc-3b9ecf398074-kube-api-access-5jjg6\") pod \"cinder-db-create-sp6mn\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.551505 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b91a6df-a714-4199-b4dc-3b9ecf398074-operator-scripts\") pod \"cinder-db-create-sp6mn\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.653375 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tt6c\" (UniqueName: \"kubernetes.io/projected/83949796-38e0-4cd4-8358-d2198dd7dfb8-kube-api-access-2tt6c\") pod \"barbican-db-create-ws2qb\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.653459 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b91a6df-a714-4199-b4dc-3b9ecf398074-operator-scripts\") pod \"cinder-db-create-sp6mn\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.653562 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83949796-38e0-4cd4-8358-d2198dd7dfb8-operator-scripts\") pod \"barbican-db-create-ws2qb\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.653592 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jjg6\" (UniqueName: \"kubernetes.io/projected/8b91a6df-a714-4199-b4dc-3b9ecf398074-kube-api-access-5jjg6\") pod \"cinder-db-create-sp6mn\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.654230 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b91a6df-a714-4199-b4dc-3b9ecf398074-operator-scripts\") pod \"cinder-db-create-sp6mn\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.657027 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-74cb-account-create-update-s7vzm"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.657968 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.660228 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.687193 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jjg6\" (UniqueName: \"kubernetes.io/projected/8b91a6df-a714-4199-b4dc-3b9ecf398074-kube-api-access-5jjg6\") pod \"cinder-db-create-sp6mn\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.702240 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-74cb-account-create-update-s7vzm"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.706863 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"8d6925cdba582789ace3400817f99ef5a11fa5573bf42b9183b2310d83669949"} Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.754671 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tt6c\" (UniqueName: \"kubernetes.io/projected/83949796-38e0-4cd4-8358-d2198dd7dfb8-kube-api-access-2tt6c\") pod \"barbican-db-create-ws2qb\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.755106 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqwp\" (UniqueName: \"kubernetes.io/projected/05998e14-d4f9-47d2-b1c7-d563505fa102-kube-api-access-2pqwp\") pod \"barbican-74cb-account-create-update-s7vzm\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.755223 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05998e14-d4f9-47d2-b1c7-d563505fa102-operator-scripts\") pod \"barbican-74cb-account-create-update-s7vzm\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.755277 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/83949796-38e0-4cd4-8358-d2198dd7dfb8-operator-scripts\") pod \"barbican-db-create-ws2qb\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.759179 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83949796-38e0-4cd4-8358-d2198dd7dfb8-operator-scripts\") pod \"barbican-db-create-ws2qb\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.771986 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rmt7b"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.773040 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.789128 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tt6c\" (UniqueName: \"kubernetes.io/projected/83949796-38e0-4cd4-8358-d2198dd7dfb8-kube-api-access-2tt6c\") pod \"barbican-db-create-ws2qb\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.804704 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.815601 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-08ac-account-create-update-vmfnk"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.817316 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.821451 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.849796 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rmt7b"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.857477 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05998e14-d4f9-47d2-b1c7-d563505fa102-operator-scripts\") pod \"barbican-74cb-account-create-update-s7vzm\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.857562 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhc75\" (UniqueName: \"kubernetes.io/projected/b22d11dd-8c6a-4114-bb95-d62054670010-kube-api-access-xhc75\") pod \"neutron-db-create-rmt7b\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.857608 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22d11dd-8c6a-4114-bb95-d62054670010-operator-scripts\") pod \"neutron-db-create-rmt7b\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.857668 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2pqwp\" (UniqueName: \"kubernetes.io/projected/05998e14-d4f9-47d2-b1c7-d563505fa102-kube-api-access-2pqwp\") pod \"barbican-74cb-account-create-update-s7vzm\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.858786 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05998e14-d4f9-47d2-b1c7-d563505fa102-operator-scripts\") pod \"barbican-74cb-account-create-update-s7vzm\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.864147 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.887979 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqwp\" (UniqueName: \"kubernetes.io/projected/05998e14-d4f9-47d2-b1c7-d563505fa102-kube-api-access-2pqwp\") pod \"barbican-74cb-account-create-update-s7vzm\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.897579 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-08ac-account-create-update-vmfnk"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.959212 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhc75\" (UniqueName: \"kubernetes.io/projected/b22d11dd-8c6a-4114-bb95-d62054670010-kube-api-access-xhc75\") pod \"neutron-db-create-rmt7b\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.959257 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22d11dd-8c6a-4114-bb95-d62054670010-operator-scripts\") pod \"neutron-db-create-rmt7b\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.959295 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ff3c2fe-30ce-45ce-938e-9b94c7549522-operator-scripts\") pod \"cinder-08ac-account-create-update-vmfnk\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.959325 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9787d\" (UniqueName: \"kubernetes.io/projected/8ff3c2fe-30ce-45ce-938e-9b94c7549522-kube-api-access-9787d\") pod \"cinder-08ac-account-create-update-vmfnk\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.960508 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22d11dd-8c6a-4114-bb95-d62054670010-operator-scripts\") pod \"neutron-db-create-rmt7b\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 
16:05:04.969630 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.977699 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5pnjn"] Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.978746 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.987864 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.990862 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.990934 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.991072 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x79jw" Jan 28 16:05:04 crc kubenswrapper[4903]: I0128 16:05:04.996596 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhc75\" (UniqueName: \"kubernetes.io/projected/b22d11dd-8c6a-4114-bb95-d62054670010-kube-api-access-xhc75\") pod \"neutron-db-create-rmt7b\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.000713 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5pnjn"] Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.032347 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8479-account-create-update-7qbbj"] Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.033917 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.041466 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.060543 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ff3c2fe-30ce-45ce-938e-9b94c7549522-operator-scripts\") pod \"cinder-08ac-account-create-update-vmfnk\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.060755 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9787d\" (UniqueName: \"kubernetes.io/projected/8ff3c2fe-30ce-45ce-938e-9b94c7549522-kube-api-access-9787d\") pod \"cinder-08ac-account-create-update-vmfnk\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.060856 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-combined-ca-bundle\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.060986 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86gq\" (UniqueName: \"kubernetes.io/projected/ab78c773-5297-4a98-8c9a-c80dbc6baf09-kube-api-access-d86gq\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.061067 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-config-data\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.062120 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ff3c2fe-30ce-45ce-938e-9b94c7549522-operator-scripts\") pod \"cinder-08ac-account-create-update-vmfnk\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.084846 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8479-account-create-update-7qbbj"] Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.098313 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9787d\" (UniqueName: \"kubernetes.io/projected/8ff3c2fe-30ce-45ce-938e-9b94c7549522-kube-api-access-9787d\") pod \"cinder-08ac-account-create-update-vmfnk\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.111416 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.131980 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.174435 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b18699-4922-43a6-a149-b0c33642f6dc-operator-scripts\") pod \"neutron-8479-account-create-update-7qbbj\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.174644 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-combined-ca-bundle\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.174980 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86gq\" (UniqueName: \"kubernetes.io/projected/ab78c773-5297-4a98-8c9a-c80dbc6baf09-kube-api-access-d86gq\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.175271 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-config-data\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.175375 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ssc\" (UniqueName: \"kubernetes.io/projected/c1b18699-4922-43a6-a149-b0c33642f6dc-kube-api-access-g2ssc\") pod \"neutron-8479-account-create-update-7qbbj\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.188842 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-combined-ca-bundle\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.200903 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-config-data\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.213071 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86gq\" (UniqueName: \"kubernetes.io/projected/ab78c773-5297-4a98-8c9a-c80dbc6baf09-kube-api-access-d86gq\") pod \"keystone-db-sync-5pnjn\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.254327 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.299710 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2ssc\" (UniqueName: \"kubernetes.io/projected/c1b18699-4922-43a6-a149-b0c33642f6dc-kube-api-access-g2ssc\") pod \"neutron-8479-account-create-update-7qbbj\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.300111 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b18699-4922-43a6-a149-b0c33642f6dc-operator-scripts\") pod \"neutron-8479-account-create-update-7qbbj\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.300857 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b18699-4922-43a6-a149-b0c33642f6dc-operator-scripts\") pod \"neutron-8479-account-create-update-7qbbj\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.337458 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2ssc\" (UniqueName: \"kubernetes.io/projected/c1b18699-4922-43a6-a149-b0c33642f6dc-kube-api-access-g2ssc\") pod \"neutron-8479-account-create-update-7qbbj\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.471697 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4zx8t" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.583672 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.606985 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-operator-scripts\") pod \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.607233 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2sbp\" (UniqueName: \"kubernetes.io/projected/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-kube-api-access-f2sbp\") pod \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\" (UID: \"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6\") " Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.609263 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad0f5c51-bd2a-4640-b0e3-a826d45a28d6" (UID: "ad0f5c51-bd2a-4640-b0e3-a826d45a28d6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.615070 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-kube-api-access-f2sbp" (OuterVolumeSpecName: "kube-api-access-f2sbp") pod "ad0f5c51-bd2a-4640-b0e3-a826d45a28d6" (UID: "ad0f5c51-bd2a-4640-b0e3-a826d45a28d6"). InnerVolumeSpecName "kube-api-access-f2sbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.697825 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sp6mn"] Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.709174 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2sbp\" (UniqueName: \"kubernetes.io/projected/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-kube-api-access-f2sbp\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.709211 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.714073 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-ws2qb"] Jan 28 16:05:05 crc kubenswrapper[4903]: W0128 16:05:05.730050 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83949796_38e0_4cd4_8358_d2198dd7dfb8.slice/crio-16eea97b0883ac5ea9e3bc3a3e4571f1939e8c2d25bc424d37de79b6046ac5e1 WatchSource:0}: Error finding container 16eea97b0883ac5ea9e3bc3a3e4571f1939e8c2d25bc424d37de79b6046ac5e1: Status 404 returned error can't find the container with id 16eea97b0883ac5ea9e3bc3a3e4571f1939e8c2d25bc424d37de79b6046ac5e1 Jan 28 16:05:05 crc kubenswrapper[4903]: W0128 16:05:05.732973 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b91a6df_a714_4199_b4dc_3b9ecf398074.slice/crio-5802a0ea3a3882a7423f3fad0ff8636ae5f7090606985f91da23f63978082f92 WatchSource:0}: Error finding container 5802a0ea3a3882a7423f3fad0ff8636ae5f7090606985f91da23f63978082f92: Status 404 returned error can't find the container with id 5802a0ea3a3882a7423f3fad0ff8636ae5f7090606985f91da23f63978082f92 Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.734120 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"1902647852c72d50cd7f7eba6e1b998be88fa3e8bce1292d120aa7ad36fcce6a"} Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.734180 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"05bd562da8eff098ad5295672772555c223f358c232a73d480a9a4208fbc2f2e"} Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.746647 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4zx8t" event={"ID":"ad0f5c51-bd2a-4640-b0e3-a826d45a28d6","Type":"ContainerDied","Data":"3c8c0aaae7cc0bf5492cf122ad38267fc8380cdbe485eab7034134e3dedfb67c"} Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.746690 4903 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3c8c0aaae7cc0bf5492cf122ad38267fc8380cdbe485eab7034134e3dedfb67c" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.746769 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4zx8t" Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.809983 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rmt7b"] Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.820857 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-74cb-account-create-update-s7vzm"] Jan 28 16:05:05 crc kubenswrapper[4903]: W0128 16:05:05.828198 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb22d11dd_8c6a_4114_bb95_d62054670010.slice/crio-ac9f9bc74cb507eae6447e3b01b745bf2b5a456ccefdebffb46c334aa18e5eec WatchSource:0}: Error finding container ac9f9bc74cb507eae6447e3b01b745bf2b5a456ccefdebffb46c334aa18e5eec: Status 404 returned error can't find the container with id ac9f9bc74cb507eae6447e3b01b745bf2b5a456ccefdebffb46c334aa18e5eec Jan 28 16:05:05 crc kubenswrapper[4903]: I0128 16:05:05.832706 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-08ac-account-create-update-vmfnk"] Jan 28 16:05:05 crc kubenswrapper[4903]: W0128 16:05:05.842824 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff3c2fe_30ce_45ce_938e_9b94c7549522.slice/crio-3be9902c72be54d87f6a01cff431c5022fb2359b51d34da0b3483bf9c2ac3a81 WatchSource:0}: Error finding container 3be9902c72be54d87f6a01cff431c5022fb2359b51d34da0b3483bf9c2ac3a81: Status 404 returned error can't find the container with id 3be9902c72be54d87f6a01cff431c5022fb2359b51d34da0b3483bf9c2ac3a81 Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.083859 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5pnjn"] Jan 28 16:05:06 crc kubenswrapper[4903]: W0128 16:05:06.096354 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab78c773_5297_4a98_8c9a_c80dbc6baf09.slice/crio-5a2d141f501377d2c047958bb9291540ae672c74cd2aa0e27eed03f50182e559 WatchSource:0}: Error finding container 5a2d141f501377d2c047958bb9291540ae672c74cd2aa0e27eed03f50182e559: Status 404 returned error can't find the container with id 5a2d141f501377d2c047958bb9291540ae672c74cd2aa0e27eed03f50182e559 Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.191095 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8479-account-create-update-7qbbj"] Jan 28 16:05:06 crc kubenswrapper[4903]: W0128 16:05:06.199018 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1b18699_4922_43a6_a149_b0c33642f6dc.slice/crio-fc1fa22cae6ee21558089c95e6f2a9b71d5b923c9422b201f8474a6f91a70bb6 WatchSource:0}: Error finding container fc1fa22cae6ee21558089c95e6f2a9b71d5b923c9422b201f8474a6f91a70bb6: Status 404 returned error can't find the container with id fc1fa22cae6ee21558089c95e6f2a9b71d5b923c9422b201f8474a6f91a70bb6 Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.423003 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91811d83-2d26-496d-84c1-ce415aa488a6" path="/var/lib/kubelet/pods/91811d83-2d26-496d-84c1-ce415aa488a6/volumes" Jan 28 16:05:06 crc 
kubenswrapper[4903]: I0128 16:05:06.756204 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74cb-account-create-update-s7vzm" event={"ID":"05998e14-d4f9-47d2-b1c7-d563505fa102","Type":"ContainerStarted","Data":"aa106559349288d080d838447b60274c9745f90e6b33e2f44943504bea86dd3f"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.756254 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74cb-account-create-update-s7vzm" event={"ID":"05998e14-d4f9-47d2-b1c7-d563505fa102","Type":"ContainerStarted","Data":"38e66b73a92c1d3ad87baceca89284333a10dd96c0b018a3683993fec6a6b3fe"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.761919 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"c78ef9751a8dce58d95c9353ff8051a2fbe27f2886b49daeb6742161a84e3b25"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.763774 4903 generic.go:334] "Generic (PLEG): container finished" podID="83949796-38e0-4cd4-8358-d2198dd7dfb8" containerID="f4cf9c1424be6cb1b11b137ac05431dbac51c733f0ebb6bb50c0edf731b0838d" exitCode=0 Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.763909 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ws2qb" event={"ID":"83949796-38e0-4cd4-8358-d2198dd7dfb8","Type":"ContainerDied","Data":"f4cf9c1424be6cb1b11b137ac05431dbac51c733f0ebb6bb50c0edf731b0838d"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.763933 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ws2qb" event={"ID":"83949796-38e0-4cd4-8358-d2198dd7dfb8","Type":"ContainerStarted","Data":"16eea97b0883ac5ea9e3bc3a3e4571f1939e8c2d25bc424d37de79b6046ac5e1"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.767904 4903 generic.go:334] "Generic (PLEG): container finished" podID="b22d11dd-8c6a-4114-bb95-d62054670010" containerID="25961eeabd65eca0650b5d0be864c09befff100fb2eaf5027e16291818437b2f" exitCode=0 Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.767996 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rmt7b" event={"ID":"b22d11dd-8c6a-4114-bb95-d62054670010","Type":"ContainerDied","Data":"25961eeabd65eca0650b5d0be864c09befff100fb2eaf5027e16291818437b2f"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.768025 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rmt7b" event={"ID":"b22d11dd-8c6a-4114-bb95-d62054670010","Type":"ContainerStarted","Data":"ac9f9bc74cb507eae6447e3b01b745bf2b5a456ccefdebffb46c334aa18e5eec"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.772443 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5pnjn" event={"ID":"ab78c773-5297-4a98-8c9a-c80dbc6baf09","Type":"ContainerStarted","Data":"5a2d141f501377d2c047958bb9291540ae672c74cd2aa0e27eed03f50182e559"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.774979 4903 generic.go:334] "Generic (PLEG): container finished" podID="8b91a6df-a714-4199-b4dc-3b9ecf398074" containerID="b28636be26f25d88749455781312c6f8a09daa88d13b8906d341951f0018609b" exitCode=0 Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.775035 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sp6mn" 
event={"ID":"8b91a6df-a714-4199-b4dc-3b9ecf398074","Type":"ContainerDied","Data":"b28636be26f25d88749455781312c6f8a09daa88d13b8906d341951f0018609b"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.775053 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sp6mn" event={"ID":"8b91a6df-a714-4199-b4dc-3b9ecf398074","Type":"ContainerStarted","Data":"5802a0ea3a3882a7423f3fad0ff8636ae5f7090606985f91da23f63978082f92"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.779641 4903 generic.go:334] "Generic (PLEG): container finished" podID="8ff3c2fe-30ce-45ce-938e-9b94c7549522" containerID="726d659d440d5494927b0d694b4e4cf744221303a1fb4b4596b02e56d758859c" exitCode=0 Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.779720 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-08ac-account-create-update-vmfnk" event={"ID":"8ff3c2fe-30ce-45ce-938e-9b94c7549522","Type":"ContainerDied","Data":"726d659d440d5494927b0d694b4e4cf744221303a1fb4b4596b02e56d758859c"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.779747 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-08ac-account-create-update-vmfnk" event={"ID":"8ff3c2fe-30ce-45ce-938e-9b94c7549522","Type":"ContainerStarted","Data":"3be9902c72be54d87f6a01cff431c5022fb2359b51d34da0b3483bf9c2ac3a81"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.780481 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-74cb-account-create-update-s7vzm" podStartSLOduration=2.7804676600000002 podStartE2EDuration="2.78046766s" podCreationTimestamp="2026-01-28 16:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:06.773769447 +0000 UTC m=+1179.049740958" watchObservedRunningTime="2026-01-28 16:05:06.78046766 +0000 UTC m=+1179.056439171" Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.791100 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8479-account-create-update-7qbbj" event={"ID":"c1b18699-4922-43a6-a149-b0c33642f6dc","Type":"ContainerStarted","Data":"5780ec06d8ceb9a89fcd3d92e75fb12da978f6318a08411b3440fd4a059a15b6"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.791157 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8479-account-create-update-7qbbj" event={"ID":"c1b18699-4922-43a6-a149-b0c33642f6dc","Type":"ContainerStarted","Data":"fc1fa22cae6ee21558089c95e6f2a9b71d5b923c9422b201f8474a6f91a70bb6"} Jan 28 16:05:06 crc kubenswrapper[4903]: I0128 16:05:06.844565 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8479-account-create-update-7qbbj" podStartSLOduration=2.8445191359999997 podStartE2EDuration="2.844519136s" podCreationTimestamp="2026-01-28 16:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:06.839605981 +0000 UTC m=+1179.115577492" watchObservedRunningTime="2026-01-28 16:05:06.844519136 +0000 UTC m=+1179.120490647" Jan 28 16:05:07 crc kubenswrapper[4903]: I0128 16:05:07.803257 4903 generic.go:334] "Generic (PLEG): container finished" podID="c1b18699-4922-43a6-a149-b0c33642f6dc" containerID="5780ec06d8ceb9a89fcd3d92e75fb12da978f6318a08411b3440fd4a059a15b6" exitCode=0 Jan 28 16:05:07 crc kubenswrapper[4903]: I0128 16:05:07.803361 4903 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-8479-account-create-update-7qbbj" event={"ID":"c1b18699-4922-43a6-a149-b0c33642f6dc","Type":"ContainerDied","Data":"5780ec06d8ceb9a89fcd3d92e75fb12da978f6318a08411b3440fd4a059a15b6"} Jan 28 16:05:07 crc kubenswrapper[4903]: I0128 16:05:07.804909 4903 generic.go:334] "Generic (PLEG): container finished" podID="05998e14-d4f9-47d2-b1c7-d563505fa102" containerID="aa106559349288d080d838447b60274c9745f90e6b33e2f44943504bea86dd3f" exitCode=0 Jan 28 16:05:07 crc kubenswrapper[4903]: I0128 16:05:07.804947 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74cb-account-create-update-s7vzm" event={"ID":"05998e14-d4f9-47d2-b1c7-d563505fa102","Type":"ContainerDied","Data":"aa106559349288d080d838447b60274c9745f90e6b33e2f44943504bea86dd3f"} Jan 28 16:05:08 crc kubenswrapper[4903]: I0128 16:05:08.815819 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"2077d11c701d11f3d5b9f94bf673c99cd175858ca2ee3f9f5496123712d24aa8"} Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.830711 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rmt7b" event={"ID":"b22d11dd-8c6a-4114-bb95-d62054670010","Type":"ContainerDied","Data":"ac9f9bc74cb507eae6447e3b01b745bf2b5a456ccefdebffb46c334aa18e5eec"} Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.831299 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac9f9bc74cb507eae6447e3b01b745bf2b5a456ccefdebffb46c334aa18e5eec" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.838032 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sp6mn" event={"ID":"8b91a6df-a714-4199-b4dc-3b9ecf398074","Type":"ContainerDied","Data":"5802a0ea3a3882a7423f3fad0ff8636ae5f7090606985f91da23f63978082f92"} Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.838073 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5802a0ea3a3882a7423f3fad0ff8636ae5f7090606985f91da23f63978082f92" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.840843 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-08ac-account-create-update-vmfnk" event={"ID":"8ff3c2fe-30ce-45ce-938e-9b94c7549522","Type":"ContainerDied","Data":"3be9902c72be54d87f6a01cff431c5022fb2359b51d34da0b3483bf9c2ac3a81"} Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.840893 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3be9902c72be54d87f6a01cff431c5022fb2359b51d34da0b3483bf9c2ac3a81" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.842447 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8479-account-create-update-7qbbj" event={"ID":"c1b18699-4922-43a6-a149-b0c33642f6dc","Type":"ContainerDied","Data":"fc1fa22cae6ee21558089c95e6f2a9b71d5b923c9422b201f8474a6f91a70bb6"} Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.842472 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1fa22cae6ee21558089c95e6f2a9b71d5b923c9422b201f8474a6f91a70bb6" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.844154 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-74cb-account-create-update-s7vzm" 
event={"ID":"05998e14-d4f9-47d2-b1c7-d563505fa102","Type":"ContainerDied","Data":"38e66b73a92c1d3ad87baceca89284333a10dd96c0b018a3683993fec6a6b3fe"} Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.844200 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e66b73a92c1d3ad87baceca89284333a10dd96c0b018a3683993fec6a6b3fe" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.846826 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-ws2qb" event={"ID":"83949796-38e0-4cd4-8358-d2198dd7dfb8","Type":"ContainerDied","Data":"16eea97b0883ac5ea9e3bc3a3e4571f1939e8c2d25bc424d37de79b6046ac5e1"} Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.846844 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16eea97b0883ac5ea9e3bc3a3e4571f1939e8c2d25bc424d37de79b6046ac5e1" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.945200 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.952204 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:10 crc kubenswrapper[4903]: I0128 16:05:10.965677 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.007726 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.011663 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.023657 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119597 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05998e14-d4f9-47d2-b1c7-d563505fa102-operator-scripts\") pod \"05998e14-d4f9-47d2-b1c7-d563505fa102\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119674 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22d11dd-8c6a-4114-bb95-d62054670010-operator-scripts\") pod \"b22d11dd-8c6a-4114-bb95-d62054670010\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119710 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9787d\" (UniqueName: \"kubernetes.io/projected/8ff3c2fe-30ce-45ce-938e-9b94c7549522-kube-api-access-9787d\") pod \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119754 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhc75\" (UniqueName: \"kubernetes.io/projected/b22d11dd-8c6a-4114-bb95-d62054670010-kube-api-access-xhc75\") pod \"b22d11dd-8c6a-4114-bb95-d62054670010\" (UID: \"b22d11dd-8c6a-4114-bb95-d62054670010\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119824 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b18699-4922-43a6-a149-b0c33642f6dc-operator-scripts\") pod \"c1b18699-4922-43a6-a149-b0c33642f6dc\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119871 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pqwp\" (UniqueName: \"kubernetes.io/projected/05998e14-d4f9-47d2-b1c7-d563505fa102-kube-api-access-2pqwp\") pod \"05998e14-d4f9-47d2-b1c7-d563505fa102\" (UID: \"05998e14-d4f9-47d2-b1c7-d563505fa102\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119914 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83949796-38e0-4cd4-8358-d2198dd7dfb8-operator-scripts\") pod \"83949796-38e0-4cd4-8358-d2198dd7dfb8\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119944 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2ssc\" (UniqueName: \"kubernetes.io/projected/c1b18699-4922-43a6-a149-b0c33642f6dc-kube-api-access-g2ssc\") pod \"c1b18699-4922-43a6-a149-b0c33642f6dc\" (UID: \"c1b18699-4922-43a6-a149-b0c33642f6dc\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119965 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tt6c\" (UniqueName: \"kubernetes.io/projected/83949796-38e0-4cd4-8358-d2198dd7dfb8-kube-api-access-2tt6c\") pod \"83949796-38e0-4cd4-8358-d2198dd7dfb8\" (UID: \"83949796-38e0-4cd4-8358-d2198dd7dfb8\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.119987 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jjg6\" (UniqueName: 
\"kubernetes.io/projected/8b91a6df-a714-4199-b4dc-3b9ecf398074-kube-api-access-5jjg6\") pod \"8b91a6df-a714-4199-b4dc-3b9ecf398074\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.120022 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b91a6df-a714-4199-b4dc-3b9ecf398074-operator-scripts\") pod \"8b91a6df-a714-4199-b4dc-3b9ecf398074\" (UID: \"8b91a6df-a714-4199-b4dc-3b9ecf398074\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.120049 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ff3c2fe-30ce-45ce-938e-9b94c7549522-operator-scripts\") pod \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\" (UID: \"8ff3c2fe-30ce-45ce-938e-9b94c7549522\") " Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.120366 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05998e14-d4f9-47d2-b1c7-d563505fa102-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05998e14-d4f9-47d2-b1c7-d563505fa102" (UID: "05998e14-d4f9-47d2-b1c7-d563505fa102"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.120825 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b91a6df-a714-4199-b4dc-3b9ecf398074-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b91a6df-a714-4199-b4dc-3b9ecf398074" (UID: "8b91a6df-a714-4199-b4dc-3b9ecf398074"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.120831 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83949796-38e0-4cd4-8358-d2198dd7dfb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83949796-38e0-4cd4-8358-d2198dd7dfb8" (UID: "83949796-38e0-4cd4-8358-d2198dd7dfb8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.120881 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b22d11dd-8c6a-4114-bb95-d62054670010-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b22d11dd-8c6a-4114-bb95-d62054670010" (UID: "b22d11dd-8c6a-4114-bb95-d62054670010"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.120928 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b18699-4922-43a6-a149-b0c33642f6dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1b18699-4922-43a6-a149-b0c33642f6dc" (UID: "c1b18699-4922-43a6-a149-b0c33642f6dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.121513 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff3c2fe-30ce-45ce-938e-9b94c7549522-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ff3c2fe-30ce-45ce-938e-9b94c7549522" (UID: "8ff3c2fe-30ce-45ce-938e-9b94c7549522"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.121703 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1b18699-4922-43a6-a149-b0c33642f6dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.121739 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83949796-38e0-4cd4-8358-d2198dd7dfb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.121752 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b91a6df-a714-4199-b4dc-3b9ecf398074-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.121762 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ff3c2fe-30ce-45ce-938e-9b94c7549522-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.121771 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05998e14-d4f9-47d2-b1c7-d563505fa102-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.121780 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b22d11dd-8c6a-4114-bb95-d62054670010-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.124915 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22d11dd-8c6a-4114-bb95-d62054670010-kube-api-access-xhc75" (OuterVolumeSpecName: "kube-api-access-xhc75") pod "b22d11dd-8c6a-4114-bb95-d62054670010" (UID: "b22d11dd-8c6a-4114-bb95-d62054670010"). InnerVolumeSpecName "kube-api-access-xhc75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.124961 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff3c2fe-30ce-45ce-938e-9b94c7549522-kube-api-access-9787d" (OuterVolumeSpecName: "kube-api-access-9787d") pod "8ff3c2fe-30ce-45ce-938e-9b94c7549522" (UID: "8ff3c2fe-30ce-45ce-938e-9b94c7549522"). InnerVolumeSpecName "kube-api-access-9787d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.125038 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05998e14-d4f9-47d2-b1c7-d563505fa102-kube-api-access-2pqwp" (OuterVolumeSpecName: "kube-api-access-2pqwp") pod "05998e14-d4f9-47d2-b1c7-d563505fa102" (UID: "05998e14-d4f9-47d2-b1c7-d563505fa102"). InnerVolumeSpecName "kube-api-access-2pqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.125476 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b91a6df-a714-4199-b4dc-3b9ecf398074-kube-api-access-5jjg6" (OuterVolumeSpecName: "kube-api-access-5jjg6") pod "8b91a6df-a714-4199-b4dc-3b9ecf398074" (UID: "8b91a6df-a714-4199-b4dc-3b9ecf398074"). InnerVolumeSpecName "kube-api-access-5jjg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.125621 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83949796-38e0-4cd4-8358-d2198dd7dfb8-kube-api-access-2tt6c" (OuterVolumeSpecName: "kube-api-access-2tt6c") pod "83949796-38e0-4cd4-8358-d2198dd7dfb8" (UID: "83949796-38e0-4cd4-8358-d2198dd7dfb8"). InnerVolumeSpecName "kube-api-access-2tt6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.128969 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b18699-4922-43a6-a149-b0c33642f6dc-kube-api-access-g2ssc" (OuterVolumeSpecName: "kube-api-access-g2ssc") pod "c1b18699-4922-43a6-a149-b0c33642f6dc" (UID: "c1b18699-4922-43a6-a149-b0c33642f6dc"). InnerVolumeSpecName "kube-api-access-g2ssc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.223252 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9787d\" (UniqueName: \"kubernetes.io/projected/8ff3c2fe-30ce-45ce-938e-9b94c7549522-kube-api-access-9787d\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.223301 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhc75\" (UniqueName: \"kubernetes.io/projected/b22d11dd-8c6a-4114-bb95-d62054670010-kube-api-access-xhc75\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.223312 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pqwp\" (UniqueName: \"kubernetes.io/projected/05998e14-d4f9-47d2-b1c7-d563505fa102-kube-api-access-2pqwp\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.223323 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2ssc\" (UniqueName: \"kubernetes.io/projected/c1b18699-4922-43a6-a149-b0c33642f6dc-kube-api-access-g2ssc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.223336 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tt6c\" (UniqueName: \"kubernetes.io/projected/83949796-38e0-4cd4-8358-d2198dd7dfb8-kube-api-access-2tt6c\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.223347 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jjg6\" (UniqueName: \"kubernetes.io/projected/8b91a6df-a714-4199-b4dc-3b9ecf398074-kube-api-access-5jjg6\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860186 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8479-account-create-update-7qbbj" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860186 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-08ac-account-create-update-vmfnk" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860208 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-ws2qb" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860238 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-sp6mn" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860235 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"eb7902754910c952a0e047350a7096399669542b9269940b5d03b5d9577fabae"} Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860999 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"9ec33b0218cbf5be31eaa4605b066cecb134d4131c4136762bbbf8bceaed18e9"} Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860290 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rmt7b" Jan 28 16:05:11 crc kubenswrapper[4903]: I0128 16:05:11.860260 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-74cb-account-create-update-s7vzm" Jan 28 16:05:12 crc kubenswrapper[4903]: I0128 16:05:12.869866 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"987273170f201bd99282bf5c33154171012fac1d73596bce885546d8d13a8681"} Jan 28 16:05:12 crc kubenswrapper[4903]: I0128 16:05:12.871382 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5pnjn" event={"ID":"ab78c773-5297-4a98-8c9a-c80dbc6baf09","Type":"ContainerStarted","Data":"99680c3ae0227fe3f1b5f6393451329ff41529f7a08e190be62136a8e1bc203e"} Jan 28 16:05:12 crc kubenswrapper[4903]: I0128 16:05:12.873795 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tbhp2" event={"ID":"d42c5032-0edb-4f98-b937-d4bc09ad513a","Type":"ContainerDied","Data":"7b254ac934a2239d6d4a13a900aec90e10f52506dada4040c9739c1b25c9d748"} Jan 28 16:05:12 crc kubenswrapper[4903]: I0128 16:05:12.873730 4903 generic.go:334] "Generic (PLEG): container finished" podID="d42c5032-0edb-4f98-b937-d4bc09ad513a" containerID="7b254ac934a2239d6d4a13a900aec90e10f52506dada4040c9739c1b25c9d748" exitCode=0 Jan 28 16:05:12 crc kubenswrapper[4903]: I0128 16:05:12.890502 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5pnjn" podStartSLOduration=2.739282249 podStartE2EDuration="8.890483192s" podCreationTimestamp="2026-01-28 16:05:04 +0000 UTC" firstStartedPulling="2026-01-28 16:05:06.100461707 +0000 UTC m=+1178.376433208" lastFinishedPulling="2026-01-28 16:05:12.25166264 +0000 UTC m=+1184.527634151" observedRunningTime="2026-01-28 16:05:12.88710549 +0000 UTC m=+1185.163077001" watchObservedRunningTime="2026-01-28 16:05:12.890483192 +0000 UTC m=+1185.166454703" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.237319 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tbhp2" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.377959 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-db-sync-config-data\") pod \"d42c5032-0edb-4f98-b937-d4bc09ad513a\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.378252 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-config-data\") pod \"d42c5032-0edb-4f98-b937-d4bc09ad513a\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.378426 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-combined-ca-bundle\") pod \"d42c5032-0edb-4f98-b937-d4bc09ad513a\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.378500 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62pwn\" (UniqueName: \"kubernetes.io/projected/d42c5032-0edb-4f98-b937-d4bc09ad513a-kube-api-access-62pwn\") pod \"d42c5032-0edb-4f98-b937-d4bc09ad513a\" (UID: \"d42c5032-0edb-4f98-b937-d4bc09ad513a\") " Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.390774 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42c5032-0edb-4f98-b937-d4bc09ad513a-kube-api-access-62pwn" (OuterVolumeSpecName: "kube-api-access-62pwn") pod "d42c5032-0edb-4f98-b937-d4bc09ad513a" (UID: "d42c5032-0edb-4f98-b937-d4bc09ad513a"). InnerVolumeSpecName "kube-api-access-62pwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.390898 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d42c5032-0edb-4f98-b937-d4bc09ad513a" (UID: "d42c5032-0edb-4f98-b937-d4bc09ad513a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.415686 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d42c5032-0edb-4f98-b937-d4bc09ad513a" (UID: "d42c5032-0edb-4f98-b937-d4bc09ad513a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.454413 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-config-data" (OuterVolumeSpecName: "config-data") pod "d42c5032-0edb-4f98-b937-d4bc09ad513a" (UID: "d42c5032-0edb-4f98-b937-d4bc09ad513a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.482483 4903 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.482790 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.482857 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42c5032-0edb-4f98-b937-d4bc09ad513a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.482920 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62pwn\" (UniqueName: \"kubernetes.io/projected/d42c5032-0edb-4f98-b937-d4bc09ad513a-kube-api-access-62pwn\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.898845 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tbhp2" event={"ID":"d42c5032-0edb-4f98-b937-d4bc09ad513a","Type":"ContainerDied","Data":"4d4806a70f3693ab6568ca09529e46f39e94d758119a47cc6acefdc05c955e72"} Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.899418 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d4806a70f3693ab6568ca09529e46f39e94d758119a47cc6acefdc05c955e72" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.899339 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tbhp2" Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.908450 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"5f7182de515dde6ed72737089f102bb7c64b5bceae2ea9dd0e07b98590e0126b"} Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.908492 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"fddb56423e806702e1b6dee36e7347c017a45be9d08b635bb4e199df0eb3489e"} Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.908502 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"bbcf62a11c97c0772b915ab52c7b8ed5336a2b9f1735f7d74650ddbac7968b3f"} Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.908511 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"eebba63abd410036bd2f597b488df5fd3fc712afc83ddb919fb3f33d78e82010"} Jan 28 16:05:14 crc kubenswrapper[4903]: I0128 16:05:14.908521 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"49fa880f8fb88d223229db177857faa713b2086ac01e656664ea7ecec2ee6237"} Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.283316 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-z2nlh"] Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.283968 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b91a6df-a714-4199-b4dc-3b9ecf398074" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.283981 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b91a6df-a714-4199-b4dc-3b9ecf398074" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.283994 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22d11dd-8c6a-4114-bb95-d62054670010" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284000 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22d11dd-8c6a-4114-bb95-d62054670010" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.284011 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42c5032-0edb-4f98-b937-d4bc09ad513a" containerName="glance-db-sync" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284017 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42c5032-0edb-4f98-b937-d4bc09ad513a" containerName="glance-db-sync" Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.284031 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83949796-38e0-4cd4-8358-d2198dd7dfb8" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284036 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="83949796-38e0-4cd4-8358-d2198dd7dfb8" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.284044 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0f5c51-bd2a-4640-b0e3-a826d45a28d6" 
containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284051 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0f5c51-bd2a-4640-b0e3-a826d45a28d6" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.284058 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05998e14-d4f9-47d2-b1c7-d563505fa102" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284063 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="05998e14-d4f9-47d2-b1c7-d563505fa102" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.284075 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff3c2fe-30ce-45ce-938e-9b94c7549522" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284081 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff3c2fe-30ce-45ce-938e-9b94c7549522" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: E0128 16:05:15.284093 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b18699-4922-43a6-a149-b0c33642f6dc" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284098 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b18699-4922-43a6-a149-b0c33642f6dc" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284255 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff3c2fe-30ce-45ce-938e-9b94c7549522" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284276 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0f5c51-bd2a-4640-b0e3-a826d45a28d6" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284283 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b91a6df-a714-4199-b4dc-3b9ecf398074" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284292 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="83949796-38e0-4cd4-8358-d2198dd7dfb8" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284300 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="05998e14-d4f9-47d2-b1c7-d563505fa102" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284309 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42c5032-0edb-4f98-b937-d4bc09ad513a" containerName="glance-db-sync" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284318 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b22d11dd-8c6a-4114-bb95-d62054670010" containerName="mariadb-database-create" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.284327 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b18699-4922-43a6-a149-b0c33642f6dc" containerName="mariadb-account-create-update" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.291146 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.294734 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-z2nlh"] Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.408665 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-config\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.408720 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.408791 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcs5w\" (UniqueName: \"kubernetes.io/projected/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-kube-api-access-hcs5w\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.408816 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.408836 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.509935 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcs5w\" (UniqueName: \"kubernetes.io/projected/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-kube-api-access-hcs5w\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.510006 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.510037 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.510104 4903 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-config\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.510126 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.511491 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-config\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.511804 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-sb\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.511804 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-nb\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.511872 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-dns-svc\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.533545 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcs5w\" (UniqueName: \"kubernetes.io/projected/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-kube-api-access-hcs5w\") pod \"dnsmasq-dns-79778dbd8c-z2nlh\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.648325 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.941057 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"427c2da60bfa90da8ebbfb150ccfb94366c48918a404ebdd1894102608ea88f1"} Jan 28 16:05:15 crc kubenswrapper[4903]: I0128 16:05:15.941424 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerStarted","Data":"fdfe4956af02ae007c08b5307ab6872b8e0595452ba36784decb8edd4b8a5d9b"} Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.033331 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.58810692 podStartE2EDuration="49.033310601s" podCreationTimestamp="2026-01-28 16:04:27 +0000 UTC" firstStartedPulling="2026-01-28 16:05:01.257257752 +0000 UTC m=+1173.533229263" lastFinishedPulling="2026-01-28 16:05:13.702461433 +0000 UTC m=+1185.978432944" observedRunningTime="2026-01-28 16:05:16.026483275 +0000 UTC m=+1188.302454786" watchObservedRunningTime="2026-01-28 16:05:16.033310601 +0000 UTC m=+1188.309282112" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.062513 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-z2nlh"] Jan 28 16:05:16 crc kubenswrapper[4903]: W0128 16:05:16.072670 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c6cbfbf_77ab_485e_9dc6_dc89c7ef76bb.slice/crio-ca2873e0929a936447f968168a288fd9f2dea0baade26546d095d8caa4a58e21 WatchSource:0}: Error finding container ca2873e0929a936447f968168a288fd9f2dea0baade26546d095d8caa4a58e21: Status 404 returned error can't find the container with id ca2873e0929a936447f968168a288fd9f2dea0baade26546d095d8caa4a58e21 Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.364696 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-z2nlh"] Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.382316 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-g7wk4"] Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.385977 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.388845 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.470997 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-g7wk4"] Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.521826 4903 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda30ccc7e-ffc7-4072-b872-f243529d9ab5"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda30ccc7e-ffc7-4072-b872-f243529d9ab5] : Timed out while waiting for systemd to remove kubepods-besteffort-poda30ccc7e_ffc7_4072_b872_f243529d9ab5.slice" Jan 28 16:05:16 crc kubenswrapper[4903]: E0128 16:05:16.521873 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poda30ccc7e-ffc7-4072-b872-f243529d9ab5] : unable to destroy cgroup paths for cgroup [kubepods besteffort poda30ccc7e-ffc7-4072-b872-f243529d9ab5] : Timed out while waiting for systemd to remove kubepods-besteffort-poda30ccc7e_ffc7_4072_b872_f243529d9ab5.slice" pod="openstack/root-account-create-update-r9tdb" podUID="a30ccc7e-ffc7-4072-b872-f243529d9ab5" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.531642 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.531860 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.531907 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.531934 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-config\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.532090 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.532203 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pjqh4\" (UniqueName: \"kubernetes.io/projected/d48c7553-2529-4e12-add9-7186f547cf34-kube-api-access-pjqh4\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.633437 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.634584 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.634779 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjqh4\" (UniqueName: \"kubernetes.io/projected/d48c7553-2529-4e12-add9-7186f547cf34-kube-api-access-pjqh4\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.635455 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.636073 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.636406 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.637161 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.637202 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.637238 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-config\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.637765 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.637934 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-config\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.655954 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjqh4\" (UniqueName: \"kubernetes.io/projected/d48c7553-2529-4e12-add9-7186f547cf34-kube-api-access-pjqh4\") pod \"dnsmasq-dns-56c9bc6f5c-g7wk4\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.835971 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.963787 4903 generic.go:334] "Generic (PLEG): container finished" podID="ab78c773-5297-4a98-8c9a-c80dbc6baf09" containerID="99680c3ae0227fe3f1b5f6393451329ff41529f7a08e190be62136a8e1bc203e" exitCode=0 Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.963890 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5pnjn" event={"ID":"ab78c773-5297-4a98-8c9a-c80dbc6baf09","Type":"ContainerDied","Data":"99680c3ae0227fe3f1b5f6393451329ff41529f7a08e190be62136a8e1bc203e"} Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.990011 4903 generic.go:334] "Generic (PLEG): container finished" podID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerID="33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719" exitCode=0 Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.991634 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" event={"ID":"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb","Type":"ContainerDied","Data":"33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719"} Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.991668 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" event={"ID":"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb","Type":"ContainerStarted","Data":"ca2873e0929a936447f968168a288fd9f2dea0baade26546d095d8caa4a58e21"} Jan 28 16:05:16 crc kubenswrapper[4903]: I0128 16:05:16.991705 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r9tdb" Jan 28 16:05:17 crc kubenswrapper[4903]: I0128 16:05:17.225781 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-g7wk4"] Jan 28 16:05:17 crc kubenswrapper[4903]: W0128 16:05:17.241505 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd48c7553_2529_4e12_add9_7186f547cf34.slice/crio-10fdd966dfe346ad7f52be1525d7de84c826c6a9e7affe06736614cc30c298de WatchSource:0}: Error finding container 10fdd966dfe346ad7f52be1525d7de84c826c6a9e7affe06736614cc30c298de: Status 404 returned error can't find the container with id 10fdd966dfe346ad7f52be1525d7de84c826c6a9e7affe06736614cc30c298de Jan 28 16:05:17 crc kubenswrapper[4903]: I0128 16:05:17.999519 4903 generic.go:334] "Generic (PLEG): container finished" podID="d48c7553-2529-4e12-add9-7186f547cf34" containerID="88ff76f959567db6f39526feebffb81b78f5255275de9bbc4a80749b058e9db4" exitCode=0 Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:17.999647 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" event={"ID":"d48c7553-2529-4e12-add9-7186f547cf34","Type":"ContainerDied","Data":"88ff76f959567db6f39526feebffb81b78f5255275de9bbc4a80749b058e9db4"} Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:17.999952 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" event={"ID":"d48c7553-2529-4e12-add9-7186f547cf34","Type":"ContainerStarted","Data":"10fdd966dfe346ad7f52be1525d7de84c826c6a9e7affe06736614cc30c298de"} Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.002078 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" event={"ID":"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb","Type":"ContainerStarted","Data":"967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6"} Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.002296 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" podUID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerName="dnsmasq-dns" containerID="cri-o://967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6" gracePeriod=10 Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.002390 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.058663 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" podStartSLOduration=3.058640872 podStartE2EDuration="3.058640872s" podCreationTimestamp="2026-01-28 16:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:18.054290034 +0000 UTC m=+1190.330261545" watchObservedRunningTime="2026-01-28 16:05:18.058640872 +0000 UTC m=+1190.334612393" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.276505 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.373083 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d86gq\" (UniqueName: \"kubernetes.io/projected/ab78c773-5297-4a98-8c9a-c80dbc6baf09-kube-api-access-d86gq\") pod \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.373458 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-combined-ca-bundle\") pod \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.373550 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-config-data\") pod \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\" (UID: \"ab78c773-5297-4a98-8c9a-c80dbc6baf09\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.378362 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab78c773-5297-4a98-8c9a-c80dbc6baf09-kube-api-access-d86gq" (OuterVolumeSpecName: "kube-api-access-d86gq") pod "ab78c773-5297-4a98-8c9a-c80dbc6baf09" (UID: "ab78c773-5297-4a98-8c9a-c80dbc6baf09"). InnerVolumeSpecName "kube-api-access-d86gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.379386 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.411181 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab78c773-5297-4a98-8c9a-c80dbc6baf09" (UID: "ab78c773-5297-4a98-8c9a-c80dbc6baf09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.433378 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-config-data" (OuterVolumeSpecName: "config-data") pod "ab78c773-5297-4a98-8c9a-c80dbc6baf09" (UID: "ab78c773-5297-4a98-8c9a-c80dbc6baf09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.474897 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-sb\") pod \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.475017 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-nb\") pod \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.475046 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-config\") pod \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.475107 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-dns-svc\") pod \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.475203 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcs5w\" (UniqueName: \"kubernetes.io/projected/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-kube-api-access-hcs5w\") pod \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\" (UID: \"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb\") " Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.475758 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d86gq\" (UniqueName: \"kubernetes.io/projected/ab78c773-5297-4a98-8c9a-c80dbc6baf09-kube-api-access-d86gq\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.475785 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.475802 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab78c773-5297-4a98-8c9a-c80dbc6baf09-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.483888 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-kube-api-access-hcs5w" (OuterVolumeSpecName: "kube-api-access-hcs5w") pod "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" (UID: "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb"). InnerVolumeSpecName "kube-api-access-hcs5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.519558 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" (UID: "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.527701 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" (UID: "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.528907 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" (UID: "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.534243 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-config" (OuterVolumeSpecName: "config") pod "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" (UID: "7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.579573 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcs5w\" (UniqueName: \"kubernetes.io/projected/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-kube-api-access-hcs5w\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.579612 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.579624 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.579637 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4903]: I0128 16:05:18.579648 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.026175 4903 generic.go:334] "Generic (PLEG): container finished" podID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerID="967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6" exitCode=0 Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.026242 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" event={"ID":"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb","Type":"ContainerDied","Data":"967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6"} Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.026258 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.026278 4903 scope.go:117] "RemoveContainer" containerID="967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.026267 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79778dbd8c-z2nlh" event={"ID":"7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb","Type":"ContainerDied","Data":"ca2873e0929a936447f968168a288fd9f2dea0baade26546d095d8caa4a58e21"} Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.031656 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5pnjn" event={"ID":"ab78c773-5297-4a98-8c9a-c80dbc6baf09","Type":"ContainerDied","Data":"5a2d141f501377d2c047958bb9291540ae672c74cd2aa0e27eed03f50182e559"} Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.031697 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a2d141f501377d2c047958bb9291540ae672c74cd2aa0e27eed03f50182e559" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.031701 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5pnjn" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.034158 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" event={"ID":"d48c7553-2529-4e12-add9-7186f547cf34","Type":"ContainerStarted","Data":"e0e80f47839bbcb8f5346c467d9fee38bcbb41843ae018369013a7744ee00b5b"} Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.034334 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.058735 4903 scope.go:117] "RemoveContainer" containerID="33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.078507 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" podStartSLOduration=3.07849273 podStartE2EDuration="3.07849273s" podCreationTimestamp="2026-01-28 16:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:19.072578988 +0000 UTC m=+1191.348550499" watchObservedRunningTime="2026-01-28 16:05:19.07849273 +0000 UTC m=+1191.354464241" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.092399 4903 scope.go:117] "RemoveContainer" containerID="967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6" Jan 28 16:05:19 crc kubenswrapper[4903]: E0128 16:05:19.093088 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6\": container with ID starting with 967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6 not found: ID does not exist" containerID="967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.093138 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6"} err="failed to get container status \"967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6\": rpc error: code = NotFound desc = could not find container 
\"967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6\": container with ID starting with 967c6ab731a24548cc52b72d02b43fc20b54942117f99f718284b67bf04af2d6 not found: ID does not exist" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.093169 4903 scope.go:117] "RemoveContainer" containerID="33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.093921 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-z2nlh"] Jan 28 16:05:19 crc kubenswrapper[4903]: E0128 16:05:19.094209 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719\": container with ID starting with 33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719 not found: ID does not exist" containerID="33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.094309 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719"} err="failed to get container status \"33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719\": rpc error: code = NotFound desc = could not find container \"33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719\": container with ID starting with 33cbeefda845c860da219e342e5bdc6570bca5282f515f3a6a2d4d4b22d3b719 not found: ID does not exist" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.101776 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79778dbd8c-z2nlh"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.213823 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-g7wk4"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.242772 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xgvd7"] Jan 28 16:05:19 crc kubenswrapper[4903]: E0128 16:05:19.243243 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerName="init" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.251020 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerName="init" Jan 28 16:05:19 crc kubenswrapper[4903]: E0128 16:05:19.251162 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerName="dnsmasq-dns" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.251177 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerName="dnsmasq-dns" Jan 28 16:05:19 crc kubenswrapper[4903]: E0128 16:05:19.251191 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab78c773-5297-4a98-8c9a-c80dbc6baf09" containerName="keystone-db-sync" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.251201 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab78c773-5297-4a98-8c9a-c80dbc6baf09" containerName="keystone-db-sync" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.251567 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab78c773-5297-4a98-8c9a-c80dbc6baf09" containerName="keystone-db-sync" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.251611 4903 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" containerName="dnsmasq-dns" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.252780 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.258932 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-krtpx"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.259877 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.261399 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.263018 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.263139 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.266627 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x79jw" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.268962 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.280742 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xgvd7"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.294591 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-krtpx"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.329675 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.329731 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-config\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.329766 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.329810 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.329830 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mttnm\" (UniqueName: \"kubernetes.io/projected/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-kube-api-access-mttnm\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.329859 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431570 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-scripts\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431633 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-config-data\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431675 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431700 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-combined-ca-bundle\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431730 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mttnm\" (UniqueName: \"kubernetes.io/projected/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-kube-api-access-mttnm\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431773 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431803 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-credential-keys\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431833 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr5v4\" (UniqueName: \"kubernetes.io/projected/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-kube-api-access-gr5v4\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431878 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-fernet-keys\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431928 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.431959 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-config\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.432000 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.433062 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.433432 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.433710 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.433734 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.433901 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-config\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.460283 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.462441 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.469908 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.470279 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.471430 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mttnm\" (UniqueName: \"kubernetes.io/projected/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-kube-api-access-mttnm\") pod \"dnsmasq-dns-54b4bb76d5-xgvd7\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.504260 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-gj6nt"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.505376 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.509883 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.509932 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.510037 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kzwm2" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.521055 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.533267 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-credential-keys\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.533320 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr5v4\" (UniqueName: \"kubernetes.io/projected/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-kube-api-access-gr5v4\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.533371 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-fernet-keys\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.533469 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-scripts\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.533504 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-config-data\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.533556 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-combined-ca-bundle\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.545685 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-scripts\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.552255 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-combined-ca-bundle\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.552328 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gj6nt"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.554644 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-fernet-keys\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.554812 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-config-data\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.562124 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-credential-keys\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.575331 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.633138 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr5v4\" (UniqueName: \"kubernetes.io/projected/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-kube-api-access-gr5v4\") pod \"keystone-bootstrap-krtpx\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.634658 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.634702 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qg2w\" (UniqueName: \"kubernetes.io/projected/ec81a835-dc41-4420-87e9-8eb5efe75894-kube-api-access-2qg2w\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.634730 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-etc-machine-id\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.634749 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-combined-ca-bundle\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.634768 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-log-httpd\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.634787 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-run-httpd\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.634806 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-db-sync-config-data\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.635033 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc 
kubenswrapper[4903]: I0128 16:05:19.635296 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-config-data\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.635331 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-config-data\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.635348 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-scripts\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.635407 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-scripts\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.635423 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4vrs\" (UniqueName: \"kubernetes.io/projected/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-kube-api-access-p4vrs\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.643811 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-s958x"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.644989 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.652026 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bmnf6" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.652261 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.674697 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-f6twx"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.675952 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.681795 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.681892 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.681962 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c2wqr" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.707129 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xgvd7"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.719718 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s958x"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.728813 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-f6twx"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736732 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-config-data\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736767 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-config-data\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736787 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-scripts\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736803 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-scripts\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736817 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4vrs\" (UniqueName: \"kubernetes.io/projected/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-kube-api-access-p4vrs\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736852 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736885 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-combined-ca-bundle\") pod \"barbican-db-sync-s958x\" (UID: 
\"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736903 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qg2w\" (UniqueName: \"kubernetes.io/projected/ec81a835-dc41-4420-87e9-8eb5efe75894-kube-api-access-2qg2w\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736923 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-etc-machine-id\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736942 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-combined-ca-bundle\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736960 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-log-httpd\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736978 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-run-httpd\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.736998 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-db-sync-config-data\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.737030 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-db-sync-config-data\") pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.737056 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fr2\" (UniqueName: \"kubernetes.io/projected/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-kube-api-access-w8fr2\") pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.737096 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.739071 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-run-httpd\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.739357 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-log-httpd\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.739402 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-etc-machine-id\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.741358 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-57gmd"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.742940 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.745299 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-scripts\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.747028 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-scripts\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.749418 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.751677 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-combined-ca-bundle\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.752381 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-config-data\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.754808 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4vrs\" (UniqueName: \"kubernetes.io/projected/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-kube-api-access-p4vrs\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.755183 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.755572 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-db-sync-config-data\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.761837 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8brlz"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.763452 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.770386 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-config-data\") pod \"cinder-db-sync-gj6nt\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.770435 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.770637 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bwvvn" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.770644 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qg2w\" (UniqueName: \"kubernetes.io/projected/ec81a835-dc41-4420-87e9-8eb5efe75894-kube-api-access-2qg2w\") pod \"ceilometer-0\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.770688 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.777738 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-57gmd"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.800030 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8brlz"] Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839652 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839710 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839737 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-svc\") pod 
\"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839765 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pfn8\" (UniqueName: \"kubernetes.io/projected/3f168baf-cfa3-4403-825f-ed1a8e92beca-kube-api-access-9pfn8\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839826 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-combined-ca-bundle\") pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839861 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-config\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839888 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-combined-ca-bundle\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839910 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxsc\" (UniqueName: \"kubernetes.io/projected/d4df0a14-2dcb-43de-8f3d-26b25f189888-kube-api-access-djxsc\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839936 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-db-sync-config-data\") pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839963 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4df0a14-2dcb-43de-8f3d-26b25f189888-logs\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.839986 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dn8b\" (UniqueName: \"kubernetes.io/projected/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-kube-api-access-4dn8b\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.840013 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8fr2\" (UniqueName: \"kubernetes.io/projected/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-kube-api-access-w8fr2\") 
pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.840041 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-config\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.840336 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-config-data\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.840363 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.840417 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-combined-ca-bundle\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.840448 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-scripts\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.857219 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-combined-ca-bundle\") pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.859031 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-db-sync-config-data\") pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.863095 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8fr2\" (UniqueName: \"kubernetes.io/projected/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-kube-api-access-w8fr2\") pod \"barbican-db-sync-s958x\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.889140 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.929122 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942462 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942524 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942563 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942618 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pfn8\" (UniqueName: \"kubernetes.io/projected/3f168baf-cfa3-4403-825f-ed1a8e92beca-kube-api-access-9pfn8\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942704 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-config\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942766 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-combined-ca-bundle\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942797 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxsc\" (UniqueName: \"kubernetes.io/projected/d4df0a14-2dcb-43de-8f3d-26b25f189888-kube-api-access-djxsc\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942830 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4df0a14-2dcb-43de-8f3d-26b25f189888-logs\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942857 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dn8b\" (UniqueName: \"kubernetes.io/projected/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-kube-api-access-4dn8b\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942897 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-config\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942928 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-config-data\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.942955 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.943013 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-combined-ca-bundle\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.943043 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-scripts\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.944165 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-config\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.944508 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.945210 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.945916 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.946931 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4df0a14-2dcb-43de-8f3d-26b25f189888-logs\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.947231 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.947887 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.950286 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-config\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.951675 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-combined-ca-bundle\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.952006 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-config-data\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.961754 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxsc\" (UniqueName: \"kubernetes.io/projected/d4df0a14-2dcb-43de-8f3d-26b25f189888-kube-api-access-djxsc\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.961823 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-scripts\") pod \"placement-db-sync-8brlz\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.963440 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dn8b\" (UniqueName: 
\"kubernetes.io/projected/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-kube-api-access-4dn8b\") pod \"dnsmasq-dns-5dc4fcdbc-57gmd\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.964380 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-combined-ca-bundle\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.965545 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pfn8\" (UniqueName: \"kubernetes.io/projected/3f168baf-cfa3-4403-825f-ed1a8e92beca-kube-api-access-9pfn8\") pod \"neutron-db-sync-f6twx\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:19 crc kubenswrapper[4903]: I0128 16:05:19.983697 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.006510 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-f6twx" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.079655 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.094951 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.141146 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xgvd7"] Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.373962 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.376215 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.377938 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.378418 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.378507 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.378605 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-g8v94" Jan 28 16:05:20 crc kubenswrapper[4903]: I0128 16:05:20.445300 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb" path="/var/lib/kubelet/pods/7c6cbfbf-77ab-485e-9dc6-dc89c7ef76bb/volumes" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.454513 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.456043 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.456539 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.456673 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.456829 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.456870 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp2kq\" (UniqueName: \"kubernetes.io/projected/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-kube-api-access-lp2kq\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.457006 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.457033 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.457162 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.457337 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-logs\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.460086 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.460317 4903 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.470673 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.481449 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.557692 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s958x"] Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558507 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp2kq\" (UniqueName: \"kubernetes.io/projected/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-kube-api-access-lp2kq\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558582 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558609 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-scripts\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558633 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558652 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558670 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kdkw\" (UniqueName: \"kubernetes.io/projected/420b54d5-7b0b-4062-a075-680a74a51c03-kube-api-access-5kdkw\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558703 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558721 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-logs\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558740 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-logs\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558775 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558802 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558824 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558862 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558883 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558905 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-config-data\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.558928 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.565042 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.565296 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-logs\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.565694 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.566682 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.570385 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.577235 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.582627 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: W0128 16:05:20.606658 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e46af77_ec52_4e6c_8f79_9cf6abf6072a.slice/crio-52db3be06bdf6a0701d721c52c667a95e4629bc97f4a61158138fd68ebed53d0 WatchSource:0}: Error finding container 52db3be06bdf6a0701d721c52c667a95e4629bc97f4a61158138fd68ebed53d0: Status 404 returned error can't find the container with id 52db3be06bdf6a0701d721c52c667a95e4629bc97f4a61158138fd68ebed53d0 Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.614572 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.616323 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp2kq\" (UniqueName: \"kubernetes.io/projected/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-kube-api-access-lp2kq\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " 
pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.663619 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gj6nt"] Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.665016 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666460 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666512 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666552 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666574 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-config-data\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666625 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666649 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-scripts\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666673 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kdkw\" (UniqueName: \"kubernetes.io/projected/420b54d5-7b0b-4062-a075-680a74a51c03-kube-api-access-5kdkw\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.666709 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.671821 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.676933 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-logs\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.681984 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.690733 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-config-data\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.693698 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.703138 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-scripts\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.703553 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.710605 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.713553 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kdkw\" (UniqueName: \"kubernetes.io/projected/420b54d5-7b0b-4062-a075-680a74a51c03-kube-api-access-5kdkw\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.745805 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.761066 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-krtpx"] Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:20.777003 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.074189 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s958x" event={"ID":"2ee18582-19e5-4d9a-8fcf-bf69d8efa384","Type":"ContainerStarted","Data":"4d1073d7a8ce68ee97c034628403d71f01e4b7c7fac12aaf651639225a11c572"} Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.076433 4903 generic.go:334] "Generic (PLEG): container finished" podID="65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" containerID="a8a337328500a8ad047538043fd76d27bff1fde2eafd47d9ad9b3a641baa9268" exitCode=0 Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.076488 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" event={"ID":"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c","Type":"ContainerDied","Data":"a8a337328500a8ad047538043fd76d27bff1fde2eafd47d9ad9b3a641baa9268"} Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.076512 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" event={"ID":"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c","Type":"ContainerStarted","Data":"4ab07155cbf9f0548aca74b2d7e720272874b02ed578c5667fafd4c68f0c857f"} Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.089940 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerStarted","Data":"b50706edb6a7c4f4029f07a45f1ebe165f427fd03b51ab028c9c63ef3d18faa6"} Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.090985 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-krtpx" event={"ID":"9e46af77-ec52-4e6c-8f79-9cf6abf6072a","Type":"ContainerStarted","Data":"52db3be06bdf6a0701d721c52c667a95e4629bc97f4a61158138fd68ebed53d0"} Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.111505 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" podUID="d48c7553-2529-4e12-add9-7186f547cf34" containerName="dnsmasq-dns" containerID="cri-o://e0e80f47839bbcb8f5346c467d9fee38bcbb41843ae018369013a7744ee00b5b" gracePeriod=10 Jan 28 16:05:21 crc kubenswrapper[4903]: I0128 16:05:21.111640 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj6nt" 
event={"ID":"cee91865-9bfc-44d2-a0e3-87a4b309ad7e","Type":"ContainerStarted","Data":"378a6f159e3321f5ae06130476c089aeba60033f97fe01c5aa59b5037a288ea1"} Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.048407 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-57gmd"] Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.063692 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-f6twx"] Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.065320 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.071823 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8brlz"] Jan 28 16:05:22 crc kubenswrapper[4903]: W0128 16:05:22.118234 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ec50878_cd94_43f7_a0ee_750e2f0ffc95.slice/crio-58fa28499b83464ebdad7f18d709f1b6b4ff7f87746828d807f56b70589eeba6 WatchSource:0}: Error finding container 58fa28499b83464ebdad7f18d709f1b6b4ff7f87746828d807f56b70589eeba6: Status 404 returned error can't find the container with id 58fa28499b83464ebdad7f18d709f1b6b4ff7f87746828d807f56b70589eeba6 Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.131824 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" event={"ID":"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c","Type":"ContainerDied","Data":"4ab07155cbf9f0548aca74b2d7e720272874b02ed578c5667fafd4c68f0c857f"} Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.131884 4903 scope.go:117] "RemoveContainer" containerID="a8a337328500a8ad047538043fd76d27bff1fde2eafd47d9ad9b3a641baa9268" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.131998 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.146728 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-krtpx" event={"ID":"9e46af77-ec52-4e6c-8f79-9cf6abf6072a","Type":"ContainerStarted","Data":"51120ba1ed6d76e83c977236b133e7a9a3d15e90becedbbdf05053eb8c96eb2b"} Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.152137 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-swift-storage-0\") pod \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.152231 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-svc\") pod \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.152392 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-config\") pod \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.152647 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-sb\") pod \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.152680 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mttnm\" (UniqueName: \"kubernetes.io/projected/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-kube-api-access-mttnm\") pod \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.152749 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-nb\") pod \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\" (UID: \"65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.166469 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-kube-api-access-mttnm" (OuterVolumeSpecName: "kube-api-access-mttnm") pod "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" (UID: "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c"). InnerVolumeSpecName "kube-api-access-mttnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.193014 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" (UID: "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.195871 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-krtpx" podStartSLOduration=3.195847134 podStartE2EDuration="3.195847134s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:22.176513987 +0000 UTC m=+1194.452485498" watchObservedRunningTime="2026-01-28 16:05:22.195847134 +0000 UTC m=+1194.471818645" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.198609 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" (UID: "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.202049 4903 generic.go:334] "Generic (PLEG): container finished" podID="d48c7553-2529-4e12-add9-7186f547cf34" containerID="e0e80f47839bbcb8f5346c467d9fee38bcbb41843ae018369013a7744ee00b5b" exitCode=0 Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.202096 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" event={"ID":"d48c7553-2529-4e12-add9-7186f547cf34","Type":"ContainerDied","Data":"e0e80f47839bbcb8f5346c467d9fee38bcbb41843ae018369013a7744ee00b5b"} Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.202126 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" event={"ID":"d48c7553-2529-4e12-add9-7186f547cf34","Type":"ContainerDied","Data":"10fdd966dfe346ad7f52be1525d7de84c826c6a9e7affe06736614cc30c298de"} Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.202137 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10fdd966dfe346ad7f52be1525d7de84c826c6a9e7affe06736614cc30c298de" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.209056 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-config" (OuterVolumeSpecName: "config") pod "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" (UID: "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.219283 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" (UID: "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.223901 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" (UID: "65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.255493 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.255523 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.255545 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.255555 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.255563 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.255573 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mttnm\" (UniqueName: \"kubernetes.io/projected/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c-kube-api-access-mttnm\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.330608 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.450978 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.458891 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-svc\") pod \"d48c7553-2529-4e12-add9-7186f547cf34\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.458945 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjqh4\" (UniqueName: \"kubernetes.io/projected/d48c7553-2529-4e12-add9-7186f547cf34-kube-api-access-pjqh4\") pod \"d48c7553-2529-4e12-add9-7186f547cf34\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.458986 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-sb\") pod \"d48c7553-2529-4e12-add9-7186f547cf34\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.459004 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-swift-storage-0\") pod \"d48c7553-2529-4e12-add9-7186f547cf34\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.459081 4903 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-nb\") pod \"d48c7553-2529-4e12-add9-7186f547cf34\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.459102 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-config\") pod \"d48c7553-2529-4e12-add9-7186f547cf34\" (UID: \"d48c7553-2529-4e12-add9-7186f547cf34\") " Jan 28 16:05:22 crc kubenswrapper[4903]: W0128 16:05:22.459707 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7fdbb62_af71_4848_bf06_16ebde1a4c8e.slice/crio-85b0d30ab26e39e3f838f21e6c2b7efdcb77e2da2063795e3b8f53eefd2893d8 WatchSource:0}: Error finding container 85b0d30ab26e39e3f838f21e6c2b7efdcb77e2da2063795e3b8f53eefd2893d8: Status 404 returned error can't find the container with id 85b0d30ab26e39e3f838f21e6c2b7efdcb77e2da2063795e3b8f53eefd2893d8 Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.473459 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48c7553-2529-4e12-add9-7186f547cf34-kube-api-access-pjqh4" (OuterVolumeSpecName: "kube-api-access-pjqh4") pod "d48c7553-2529-4e12-add9-7186f547cf34" (UID: "d48c7553-2529-4e12-add9-7186f547cf34"). InnerVolumeSpecName "kube-api-access-pjqh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.560846 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjqh4\" (UniqueName: \"kubernetes.io/projected/d48c7553-2529-4e12-add9-7186f547cf34-kube-api-access-pjqh4\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.609784 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.697912 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.759257 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:22 crc kubenswrapper[4903]: I0128 16:05:22.891047 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.050490 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d48c7553-2529-4e12-add9-7186f547cf34" (UID: "d48c7553-2529-4e12-add9-7186f547cf34"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.074321 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.089396 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d48c7553-2529-4e12-add9-7186f547cf34" (UID: "d48c7553-2529-4e12-add9-7186f547cf34"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.104238 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-config" (OuterVolumeSpecName: "config") pod "d48c7553-2529-4e12-add9-7186f547cf34" (UID: "d48c7553-2529-4e12-add9-7186f547cf34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.131992 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d48c7553-2529-4e12-add9-7186f547cf34" (UID: "d48c7553-2529-4e12-add9-7186f547cf34"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.153591 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d48c7553-2529-4e12-add9-7186f547cf34" (UID: "d48c7553-2529-4e12-add9-7186f547cf34"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.176341 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.176382 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.176392 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.176401 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d48c7553-2529-4e12-add9-7186f547cf34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.218332 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7fdbb62-af71-4848-bf06-16ebde1a4c8e","Type":"ContainerStarted","Data":"85b0d30ab26e39e3f838f21e6c2b7efdcb77e2da2063795e3b8f53eefd2893d8"} Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.232279 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f6twx" event={"ID":"3f168baf-cfa3-4403-825f-ed1a8e92beca","Type":"ContainerStarted","Data":"2f14a8e6570081278a03878a29cb6110720759ffc05ca7173bc560fa7048f1c3"} Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.244204 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8brlz" event={"ID":"d4df0a14-2dcb-43de-8f3d-26b25f189888","Type":"ContainerStarted","Data":"e144961d601d2163004a92f02be7b5f2a5335ba9af044ac7ecff68289abc2ea2"} Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.251495 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"420b54d5-7b0b-4062-a075-680a74a51c03","Type":"ContainerStarted","Data":"fdd2fb2277aa65c87b456b47e59ccf155f8e2043f3fb4141e51765aca44d519a"} Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.260350 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" event={"ID":"9ec50878-cd94-43f7-a0ee-750e2f0ffc95","Type":"ContainerStarted","Data":"58fa28499b83464ebdad7f18d709f1b6b4ff7f87746828d807f56b70589eeba6"} Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.260398 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-g7wk4" Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.322399 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-g7wk4"] Jan 28 16:05:23 crc kubenswrapper[4903]: I0128 16:05:23.333913 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-g7wk4"] Jan 28 16:05:24 crc kubenswrapper[4903]: I0128 16:05:24.287037 4903 generic.go:334] "Generic (PLEG): container finished" podID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerID="b0ba7bf51857c58dce88f9dc8f3151005562f9b649e33a130e03fb6d753bfd31" exitCode=0 Jan 28 16:05:24 crc kubenswrapper[4903]: I0128 16:05:24.287319 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" event={"ID":"9ec50878-cd94-43f7-a0ee-750e2f0ffc95","Type":"ContainerDied","Data":"b0ba7bf51857c58dce88f9dc8f3151005562f9b649e33a130e03fb6d753bfd31"} Jan 28 16:05:24 crc kubenswrapper[4903]: I0128 16:05:24.293500 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7fdbb62-af71-4848-bf06-16ebde1a4c8e","Type":"ContainerStarted","Data":"ca0f456d6ac468372739501aa12e44dfc2ff1431d57f40590e2c0c949740d5e1"} Jan 28 16:05:24 crc kubenswrapper[4903]: I0128 16:05:24.296665 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f6twx" event={"ID":"3f168baf-cfa3-4403-825f-ed1a8e92beca","Type":"ContainerStarted","Data":"c7bf1e8f41ac47e5ad10262b8826c4d9516f64bb9a727ac6db342e3fd3db3370"} Jan 28 16:05:24 crc kubenswrapper[4903]: I0128 16:05:24.350510 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-f6twx" podStartSLOduration=5.35048773 podStartE2EDuration="5.35048773s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:24.325481809 +0000 UTC m=+1196.601453340" watchObservedRunningTime="2026-01-28 16:05:24.35048773 +0000 UTC m=+1196.626459241" Jan 28 16:05:24 crc kubenswrapper[4903]: I0128 16:05:24.428038 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48c7553-2529-4e12-add9-7186f547cf34" path="/var/lib/kubelet/pods/d48c7553-2529-4e12-add9-7186f547cf34/volumes" Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.323981 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7fdbb62-af71-4848-bf06-16ebde1a4c8e","Type":"ContainerStarted","Data":"4f422eded16fbe21a914647d2b2c3955e5772409f0dae49d624854e5e489ced4"} Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.324655 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-log" containerID="cri-o://ca0f456d6ac468372739501aa12e44dfc2ff1431d57f40590e2c0c949740d5e1" gracePeriod=30 Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.324660 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-httpd" containerID="cri-o://4f422eded16fbe21a914647d2b2c3955e5772409f0dae49d624854e5e489ced4" gracePeriod=30 Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.328611 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"420b54d5-7b0b-4062-a075-680a74a51c03","Type":"ContainerStarted","Data":"f3ee9e5cfbbafd5dc56d89c7e74393a28ffee89903a0a3a5fc996e7f437166e1"} Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.328648 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"420b54d5-7b0b-4062-a075-680a74a51c03","Type":"ContainerStarted","Data":"9218080a89992ee3b663ba0a8a93799448851ac87830265472a684a880afd6b0"} Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.335634 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" event={"ID":"9ec50878-cd94-43f7-a0ee-750e2f0ffc95","Type":"ContainerStarted","Data":"630d1568fb7af1b219114384dc4e2056041faa5abd0a851fa1ecc695972d5996"} Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.335742 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.352559 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.352521402 podStartE2EDuration="6.352521402s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:25.349316654 +0000 UTC m=+1197.625288165" watchObservedRunningTime="2026-01-28 16:05:25.352521402 +0000 UTC m=+1197.628492913" Jan 28 16:05:25 crc kubenswrapper[4903]: I0128 16:05:25.371927 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" podStartSLOduration=6.371904269 podStartE2EDuration="6.371904269s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:25.369227187 +0000 UTC m=+1197.645198708" watchObservedRunningTime="2026-01-28 16:05:25.371904269 +0000 UTC m=+1197.647875780" Jan 28 16:05:26 crc kubenswrapper[4903]: I0128 16:05:26.348139 4903 generic.go:334] "Generic (PLEG): container finished" podID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerID="4f422eded16fbe21a914647d2b2c3955e5772409f0dae49d624854e5e489ced4" exitCode=0 Jan 28 16:05:26 crc kubenswrapper[4903]: I0128 16:05:26.348181 4903 generic.go:334] "Generic (PLEG): container finished" podID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerID="ca0f456d6ac468372739501aa12e44dfc2ff1431d57f40590e2c0c949740d5e1" exitCode=143 Jan 28 16:05:26 crc kubenswrapper[4903]: I0128 16:05:26.348220 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7fdbb62-af71-4848-bf06-16ebde1a4c8e","Type":"ContainerDied","Data":"4f422eded16fbe21a914647d2b2c3955e5772409f0dae49d624854e5e489ced4"} Jan 28 16:05:26 crc kubenswrapper[4903]: I0128 16:05:26.348263 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7fdbb62-af71-4848-bf06-16ebde1a4c8e","Type":"ContainerDied","Data":"ca0f456d6ac468372739501aa12e44dfc2ff1431d57f40590e2c0c949740d5e1"} Jan 28 16:05:26 crc kubenswrapper[4903]: I0128 16:05:26.348482 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-log" 
containerID="cri-o://9218080a89992ee3b663ba0a8a93799448851ac87830265472a684a880afd6b0" gracePeriod=30 Jan 28 16:05:26 crc kubenswrapper[4903]: I0128 16:05:26.348604 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-httpd" containerID="cri-o://f3ee9e5cfbbafd5dc56d89c7e74393a28ffee89903a0a3a5fc996e7f437166e1" gracePeriod=30 Jan 28 16:05:26 crc kubenswrapper[4903]: I0128 16:05:26.381990 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.381968048 podStartE2EDuration="7.381968048s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:26.372592412 +0000 UTC m=+1198.648564153" watchObservedRunningTime="2026-01-28 16:05:26.381968048 +0000 UTC m=+1198.657939559" Jan 28 16:05:27 crc kubenswrapper[4903]: I0128 16:05:27.359426 4903 generic.go:334] "Generic (PLEG): container finished" podID="420b54d5-7b0b-4062-a075-680a74a51c03" containerID="f3ee9e5cfbbafd5dc56d89c7e74393a28ffee89903a0a3a5fc996e7f437166e1" exitCode=0 Jan 28 16:05:27 crc kubenswrapper[4903]: I0128 16:05:27.359944 4903 generic.go:334] "Generic (PLEG): container finished" podID="420b54d5-7b0b-4062-a075-680a74a51c03" containerID="9218080a89992ee3b663ba0a8a93799448851ac87830265472a684a880afd6b0" exitCode=143 Jan 28 16:05:27 crc kubenswrapper[4903]: I0128 16:05:27.359497 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"420b54d5-7b0b-4062-a075-680a74a51c03","Type":"ContainerDied","Data":"f3ee9e5cfbbafd5dc56d89c7e74393a28ffee89903a0a3a5fc996e7f437166e1"} Jan 28 16:05:27 crc kubenswrapper[4903]: I0128 16:05:27.360024 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"420b54d5-7b0b-4062-a075-680a74a51c03","Type":"ContainerDied","Data":"9218080a89992ee3b663ba0a8a93799448851ac87830265472a684a880afd6b0"} Jan 28 16:05:27 crc kubenswrapper[4903]: I0128 16:05:27.361575 4903 generic.go:334] "Generic (PLEG): container finished" podID="9e46af77-ec52-4e6c-8f79-9cf6abf6072a" containerID="51120ba1ed6d76e83c977236b133e7a9a3d15e90becedbbdf05053eb8c96eb2b" exitCode=0 Jan 28 16:05:27 crc kubenswrapper[4903]: I0128 16:05:27.361615 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-krtpx" event={"ID":"9e46af77-ec52-4e6c-8f79-9cf6abf6072a","Type":"ContainerDied","Data":"51120ba1ed6d76e83c977236b133e7a9a3d15e90becedbbdf05053eb8c96eb2b"} Jan 28 16:05:30 crc kubenswrapper[4903]: I0128 16:05:30.080732 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:05:30 crc kubenswrapper[4903]: I0128 16:05:30.148965 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-85fn5"] Jan 28 16:05:30 crc kubenswrapper[4903]: I0128 16:05:30.149252 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="dnsmasq-dns" containerID="cri-o://aa4f7f08c087fdda6c2798cace6400d2c72036cdc1120ab22fce52d20d57f338" gracePeriod=10 Jan 28 16:05:30 crc kubenswrapper[4903]: I0128 16:05:30.401374 4903 generic.go:334] "Generic (PLEG): container 
finished" podID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerID="aa4f7f08c087fdda6c2798cace6400d2c72036cdc1120ab22fce52d20d57f338" exitCode=0 Jan 28 16:05:30 crc kubenswrapper[4903]: I0128 16:05:30.401414 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" event={"ID":"3eed0863-7a63-42a2-8f91-e98d60e5770f","Type":"ContainerDied","Data":"aa4f7f08c087fdda6c2798cace6400d2c72036cdc1120ab22fce52d20d57f338"} Jan 28 16:05:34 crc kubenswrapper[4903]: I0128 16:05:34.195632 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Jan 28 16:05:35 crc kubenswrapper[4903]: E0128 16:05:35.590276 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 28 16:05:35 crc kubenswrapper[4903]: E0128 16:05:35.590814 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54dh575h594h579h697h5cdh6bh9dh58dh687h5c7h668h585h577h5c7h666h56ch88hchfch67h64fhb9h5f8h649hc8h94h649h7bh646h8ch9dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qg2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ec81a835-dc41-4420-87e9-8eb5efe75894): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.738619 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.832133 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-scripts\") pod \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.832241 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-fernet-keys\") pod \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.832314 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr5v4\" (UniqueName: \"kubernetes.io/projected/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-kube-api-access-gr5v4\") pod \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.832349 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-config-data\") pod \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.832404 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-credential-keys\") pod \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.832584 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-combined-ca-bundle\") pod \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\" (UID: \"9e46af77-ec52-4e6c-8f79-9cf6abf6072a\") " Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.839237 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod 
"9e46af77-ec52-4e6c-8f79-9cf6abf6072a" (UID: "9e46af77-ec52-4e6c-8f79-9cf6abf6072a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.840257 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-scripts" (OuterVolumeSpecName: "scripts") pod "9e46af77-ec52-4e6c-8f79-9cf6abf6072a" (UID: "9e46af77-ec52-4e6c-8f79-9cf6abf6072a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.842057 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9e46af77-ec52-4e6c-8f79-9cf6abf6072a" (UID: "9e46af77-ec52-4e6c-8f79-9cf6abf6072a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.842483 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-kube-api-access-gr5v4" (OuterVolumeSpecName: "kube-api-access-gr5v4") pod "9e46af77-ec52-4e6c-8f79-9cf6abf6072a" (UID: "9e46af77-ec52-4e6c-8f79-9cf6abf6072a"). InnerVolumeSpecName "kube-api-access-gr5v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.864232 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e46af77-ec52-4e6c-8f79-9cf6abf6072a" (UID: "9e46af77-ec52-4e6c-8f79-9cf6abf6072a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.872486 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-config-data" (OuterVolumeSpecName: "config-data") pod "9e46af77-ec52-4e6c-8f79-9cf6abf6072a" (UID: "9e46af77-ec52-4e6c-8f79-9cf6abf6072a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.934722 4903 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.934777 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr5v4\" (UniqueName: \"kubernetes.io/projected/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-kube-api-access-gr5v4\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.934794 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.934806 4903 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.934819 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:35 crc kubenswrapper[4903]: I0128 16:05:35.934832 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e46af77-ec52-4e6c-8f79-9cf6abf6072a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.435519 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.436393 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8fr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-s958x_openstack(2ee18582-19e5-4d9a-8fcf-bf69d8efa384): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.437578 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-s958x" podUID="2ee18582-19e5-4d9a-8fcf-bf69d8efa384" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.456383 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-krtpx" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.459143 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-krtpx" event={"ID":"9e46af77-ec52-4e6c-8f79-9cf6abf6072a","Type":"ContainerDied","Data":"52db3be06bdf6a0701d721c52c667a95e4629bc97f4a61158138fd68ebed53d0"} Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.459187 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52db3be06bdf6a0701d721c52c667a95e4629bc97f4a61158138fd68ebed53d0" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.462681 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"420b54d5-7b0b-4062-a075-680a74a51c03","Type":"ContainerDied","Data":"fdd2fb2277aa65c87b456b47e59ccf155f8e2043f3fb4141e51765aca44d519a"} Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.462726 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd2fb2277aa65c87b456b47e59ccf155f8e2043f3fb4141e51765aca44d519a" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.464181 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-s958x" podUID="2ee18582-19e5-4d9a-8fcf-bf69d8efa384" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.473039 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.545798 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.545924 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-logs\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.545960 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-combined-ca-bundle\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.546004 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-scripts\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.546026 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-config-data\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.546114 4903 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-internal-tls-certs\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.546135 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-httpd-run\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.546152 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kdkw\" (UniqueName: \"kubernetes.io/projected/420b54d5-7b0b-4062-a075-680a74a51c03-kube-api-access-5kdkw\") pod \"420b54d5-7b0b-4062-a075-680a74a51c03\" (UID: \"420b54d5-7b0b-4062-a075-680a74a51c03\") " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.549609 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.549786 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-logs" (OuterVolumeSpecName: "logs") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.550924 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.556004 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420b54d5-7b0b-4062-a075-680a74a51c03-kube-api-access-5kdkw" (OuterVolumeSpecName: "kube-api-access-5kdkw") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "kube-api-access-5kdkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.558736 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-scripts" (OuterVolumeSpecName: "scripts") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.578342 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.607724 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.611274 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-config-data" (OuterVolumeSpecName: "config-data") pod "420b54d5-7b0b-4062-a075-680a74a51c03" (UID: "420b54d5-7b0b-4062-a075-680a74a51c03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648473 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648510 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648523 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648553 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648564 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648575 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420b54d5-7b0b-4062-a075-680a74a51c03-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648584 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/420b54d5-7b0b-4062-a075-680a74a51c03-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.648594 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kdkw\" (UniqueName: \"kubernetes.io/projected/420b54d5-7b0b-4062-a075-680a74a51c03-kube-api-access-5kdkw\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.672294 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.750230 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 
16:05:36.830506 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-krtpx"] Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.839156 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-krtpx"] Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.919772 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mrfh5"] Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.920193 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48c7553-2529-4e12-add9-7186f547cf34" containerName="init" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920219 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48c7553-2529-4e12-add9-7186f547cf34" containerName="init" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.920233 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-httpd" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920239 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-httpd" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.920250 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e46af77-ec52-4e6c-8f79-9cf6abf6072a" containerName="keystone-bootstrap" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920257 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e46af77-ec52-4e6c-8f79-9cf6abf6072a" containerName="keystone-bootstrap" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.920271 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" containerName="init" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920277 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" containerName="init" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.920289 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-log" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920294 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-log" Jan 28 16:05:36 crc kubenswrapper[4903]: E0128 16:05:36.920305 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48c7553-2529-4e12-add9-7186f547cf34" containerName="dnsmasq-dns" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920311 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48c7553-2529-4e12-add9-7186f547cf34" containerName="dnsmasq-dns" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920465 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-httpd" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920478 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" containerName="glance-log" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920487 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e46af77-ec52-4e6c-8f79-9cf6abf6072a" containerName="keystone-bootstrap" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920493 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" containerName="init" Jan 28 
16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.920505 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48c7553-2529-4e12-add9-7186f547cf34" containerName="dnsmasq-dns" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.921019 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.924186 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.924359 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x79jw" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.924449 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.924485 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.925642 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.943327 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mrfh5"] Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.955695 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-config-data\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.955777 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm7kq\" (UniqueName: \"kubernetes.io/projected/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-kube-api-access-vm7kq\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.955848 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-combined-ca-bundle\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.955904 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-credential-keys\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.955986 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-scripts\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:36 crc kubenswrapper[4903]: I0128 16:05:36.956018 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-fernet-keys\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.057901 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-config-data\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.057984 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm7kq\" (UniqueName: \"kubernetes.io/projected/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-kube-api-access-vm7kq\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.058071 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-combined-ca-bundle\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.058134 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-credential-keys\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.058223 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-scripts\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.058253 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-fernet-keys\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.062886 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-config-data\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.063499 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-scripts\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.067114 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-credential-keys\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " 
pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.067839 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-fernet-keys\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.075509 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm7kq\" (UniqueName: \"kubernetes.io/projected/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-kube-api-access-vm7kq\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.088329 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-combined-ca-bundle\") pod \"keystone-bootstrap-mrfh5\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.241945 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.470092 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.505665 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.518929 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.537437 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.539195 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.541337 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.541878 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.548799 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668214 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668299 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668482 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668565 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668615 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668651 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668800 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.668928 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-sp6xc\" (UniqueName: \"kubernetes.io/projected/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-kube-api-access-sp6xc\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770291 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp6xc\" (UniqueName: \"kubernetes.io/projected/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-kube-api-access-sp6xc\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770353 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770415 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770455 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770476 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770498 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770517 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770554 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.770987 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.771398 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-logs\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.771480 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.776109 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.779458 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.783573 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.788166 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp6xc\" (UniqueName: \"kubernetes.io/projected/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-kube-api-access-sp6xc\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.800122 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.810599 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:05:37 crc kubenswrapper[4903]: I0128 16:05:37.889021 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:38 crc kubenswrapper[4903]: I0128 16:05:38.425343 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420b54d5-7b0b-4062-a075-680a74a51c03" path="/var/lib/kubelet/pods/420b54d5-7b0b-4062-a075-680a74a51c03/volumes" Jan 28 16:05:38 crc kubenswrapper[4903]: I0128 16:05:38.426022 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e46af77-ec52-4e6c-8f79-9cf6abf6072a" path="/var/lib/kubelet/pods/9e46af77-ec52-4e6c-8f79-9cf6abf6072a/volumes" Jan 28 16:05:44 crc kubenswrapper[4903]: I0128 16:05:44.196016 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.420016 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.426902 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530550 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-dns-svc\") pod \"3eed0863-7a63-42a2-8f91-e98d60e5770f\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530620 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-config\") pod \"3eed0863-7a63-42a2-8f91-e98d60e5770f\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530637 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-sb\") pod \"3eed0863-7a63-42a2-8f91-e98d60e5770f\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530670 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-nb\") pod \"3eed0863-7a63-42a2-8f91-e98d60e5770f\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530693 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-public-tls-certs\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530717 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfbms\" (UniqueName: \"kubernetes.io/projected/3eed0863-7a63-42a2-8f91-e98d60e5770f-kube-api-access-xfbms\") pod \"3eed0863-7a63-42a2-8f91-e98d60e5770f\" (UID: \"3eed0863-7a63-42a2-8f91-e98d60e5770f\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530775 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-combined-ca-bundle\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530792 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530806 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-scripts\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530840 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-httpd-run\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530869 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp2kq\" (UniqueName: \"kubernetes.io/projected/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-kube-api-access-lp2kq\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530893 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-config-data\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.530912 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-logs\") pod \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\" (UID: \"f7fdbb62-af71-4848-bf06-16ebde1a4c8e\") " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.533261 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-logs" (OuterVolumeSpecName: "logs") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.533520 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.537353 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.537785 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eed0863-7a63-42a2-8f91-e98d60e5770f-kube-api-access-xfbms" (OuterVolumeSpecName: "kube-api-access-xfbms") pod "3eed0863-7a63-42a2-8f91-e98d60e5770f" (UID: "3eed0863-7a63-42a2-8f91-e98d60e5770f"). InnerVolumeSpecName "kube-api-access-xfbms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.538859 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-scripts" (OuterVolumeSpecName: "scripts") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.539803 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-kube-api-access-lp2kq" (OuterVolumeSpecName: "kube-api-access-lp2kq") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "kube-api-access-lp2kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.550317 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" event={"ID":"3eed0863-7a63-42a2-8f91-e98d60e5770f","Type":"ContainerDied","Data":"1d03a50e873ae30975a3d6c387b26ac15b7ad7042a726774b1c77ede3ec8ae46"} Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.550361 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.550711 4903 scope.go:117] "RemoveContainer" containerID="aa4f7f08c087fdda6c2798cace6400d2c72036cdc1120ab22fce52d20d57f338" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.553553 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7fdbb62-af71-4848-bf06-16ebde1a4c8e","Type":"ContainerDied","Data":"85b0d30ab26e39e3f838f21e6c2b7efdcb77e2da2063795e3b8f53eefd2893d8"} Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.553706 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.578692 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eed0863-7a63-42a2-8f91-e98d60e5770f" (UID: "3eed0863-7a63-42a2-8f91-e98d60e5770f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.580631 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-config" (OuterVolumeSpecName: "config") pod "3eed0863-7a63-42a2-8f91-e98d60e5770f" (UID: "3eed0863-7a63-42a2-8f91-e98d60e5770f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.587805 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.589678 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3eed0863-7a63-42a2-8f91-e98d60e5770f" (UID: "3eed0863-7a63-42a2-8f91-e98d60e5770f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.591601 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-config-data" (OuterVolumeSpecName: "config-data") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.604961 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3eed0863-7a63-42a2-8f91-e98d60e5770f" (UID: "3eed0863-7a63-42a2-8f91-e98d60e5770f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.605058 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7fdbb62-af71-4848-bf06-16ebde1a4c8e" (UID: "f7fdbb62-af71-4848-bf06-16ebde1a4c8e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633354 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633442 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633455 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633466 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633476 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp2kq\" (UniqueName: \"kubernetes.io/projected/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-kube-api-access-lp2kq\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633489 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633500 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633509 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633519 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633541 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633551 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eed0863-7a63-42a2-8f91-e98d60e5770f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633560 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7fdbb62-af71-4848-bf06-16ebde1a4c8e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.633570 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfbms\" (UniqueName: \"kubernetes.io/projected/3eed0863-7a63-42a2-8f91-e98d60e5770f-kube-api-access-xfbms\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.648225 4903 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.735120 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.914404 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-85fn5"] Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.932631 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-85fn5"] Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.943646 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.959602 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.968277 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:45 crc kubenswrapper[4903]: E0128 16:05:45.968707 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="dnsmasq-dns" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.968726 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="dnsmasq-dns" Jan 28 16:05:45 crc kubenswrapper[4903]: E0128 16:05:45.968742 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-log" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.968750 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-log" Jan 28 16:05:45 crc kubenswrapper[4903]: E0128 16:05:45.968775 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="init" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.968783 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="init" Jan 28 16:05:45 crc kubenswrapper[4903]: E0128 16:05:45.968795 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-httpd" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.968803 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-httpd" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.969006 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-httpd" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.969030 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="dnsmasq-dns" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.969051 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" containerName="glance-log" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.970098 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.976372 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.979118 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 16:05:45 crc kubenswrapper[4903]: I0128 16:05:45.979126 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039426 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4mf\" (UniqueName: \"kubernetes.io/projected/c73a1965-ccff-43eb-a317-91ca6e551c4e-kube-api-access-ct4mf\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039479 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-scripts\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039531 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039570 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039594 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039626 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039650 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-logs\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.039768 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141506 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141624 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4mf\" (UniqueName: \"kubernetes.io/projected/c73a1965-ccff-43eb-a317-91ca6e551c4e-kube-api-access-ct4mf\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141655 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-scripts\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141700 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141723 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141747 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141778 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.141800 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-logs\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.142282 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-logs\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.143490 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.143981 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.147010 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-scripts\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.149970 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.150056 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.153765 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-config-data\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.162112 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4mf\" (UniqueName: \"kubernetes.io/projected/c73a1965-ccff-43eb-a317-91ca6e551c4e-kube-api-access-ct4mf\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.175196 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.293505 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.423493 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" path="/var/lib/kubelet/pods/3eed0863-7a63-42a2-8f91-e98d60e5770f/volumes" Jan 28 16:05:46 crc kubenswrapper[4903]: I0128 16:05:46.424305 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fdbb62-af71-4848-bf06-16ebde1a4c8e" path="/var/lib/kubelet/pods/f7fdbb62-af71-4848-bf06-16ebde1a4c8e/volumes" Jan 28 16:05:48 crc kubenswrapper[4903]: E0128 16:05:48.178879 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 28 16:05:48 crc kubenswrapper[4903]: E0128 16:05:48.179484 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4vrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-gj6nt_openstack(cee91865-9bfc-44d2-a0e3-87a4b309ad7e): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:05:48 crc kubenswrapper[4903]: E0128 16:05:48.180839 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-gj6nt" podUID="cee91865-9bfc-44d2-a0e3-87a4b309ad7e" Jan 28 16:05:48 crc kubenswrapper[4903]: I0128 16:05:48.203972 4903 scope.go:117] "RemoveContainer" containerID="44806975fe6725d16529c77a438148c9ba42fe17b83fd84b083e81954aa8d5ae" Jan 28 16:05:48 crc kubenswrapper[4903]: I0128 16:05:48.557845 4903 scope.go:117] "RemoveContainer" containerID="4f422eded16fbe21a914647d2b2c3955e5772409f0dae49d624854e5e489ced4" Jan 28 16:05:48 crc kubenswrapper[4903]: E0128 16:05:48.598107 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-gj6nt" podUID="cee91865-9bfc-44d2-a0e3-87a4b309ad7e" Jan 28 16:05:48 crc kubenswrapper[4903]: I0128 16:05:48.603109 4903 scope.go:117] "RemoveContainer" containerID="ca0f456d6ac468372739501aa12e44dfc2ff1431d57f40590e2c0c949740d5e1" Jan 28 16:05:48 crc kubenswrapper[4903]: I0128 16:05:48.719004 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:05:48 crc kubenswrapper[4903]: W0128 16:05:48.727076 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40392bf6_fb24_41cb_b61a_2b6d768b3f9b.slice/crio-c52214e41daaffc0ab7e69d23de1a8abffdfc3a943332181afdbe3872807c24a WatchSource:0}: Error finding container c52214e41daaffc0ab7e69d23de1a8abffdfc3a943332181afdbe3872807c24a: Status 404 returned error can't find the container with id c52214e41daaffc0ab7e69d23de1a8abffdfc3a943332181afdbe3872807c24a Jan 28 16:05:48 crc kubenswrapper[4903]: I0128 16:05:48.762878 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mrfh5"] Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.136186 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:05:49 crc kubenswrapper[4903]: W0128 16:05:49.145044 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc73a1965_ccff_43eb_a317_91ca6e551c4e.slice/crio-93ac5e0cd1e8c630f70dc2f6819515e19ac5872172b0dada2894d9cafd7db0b8 WatchSource:0}: Error finding container 93ac5e0cd1e8c630f70dc2f6819515e19ac5872172b0dada2894d9cafd7db0b8: Status 404 returned error can't find the container with id 93ac5e0cd1e8c630f70dc2f6819515e19ac5872172b0dada2894d9cafd7db0b8 Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.196720 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-85fn5" podUID="3eed0863-7a63-42a2-8f91-e98d60e5770f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.600984 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrfh5" 
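The records above show the kubelet abandoning the cinder-db-sync image pull (the CRI copy was cancelled) and then parking the container in ImagePullBackOff. A minimal client-go sketch for reading that same condition back out of the pod status is below; the pod name and namespace come from the log, while the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the client actually in use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod and namespace taken from the ErrImagePull / ImagePullBackOff records above.
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "cinder-db-sync-gj6nt", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if st.State.Waiting != nil {
			// While the back-off lasts this prints something like:
			// cinder-db-sync: ImagePullBackOff (Back-off pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:...")
			fmt.Printf("%s: %s (%s)\n", st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
		}
	}
}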
event={"ID":"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f","Type":"ContainerStarted","Data":"ce9969752253223f3d742d8c53554034aef2e9373de610bd14d5da8524527791"} Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.601315 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrfh5" event={"ID":"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f","Type":"ContainerStarted","Data":"26fd417246c8826d0b1ddbd93f52da0dbf9260d71a22f6be17ddbd679be2cb0e"} Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.604716 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40392bf6-fb24-41cb-b61a-2b6d768b3f9b","Type":"ContainerStarted","Data":"e60036d9f4a459543f76f778dfe1619ad283a64587191bebb3dcd09d034ce5f1"} Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.604773 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40392bf6-fb24-41cb-b61a-2b6d768b3f9b","Type":"ContainerStarted","Data":"c52214e41daaffc0ab7e69d23de1a8abffdfc3a943332181afdbe3872807c24a"} Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.606951 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8brlz" event={"ID":"d4df0a14-2dcb-43de-8f3d-26b25f189888","Type":"ContainerStarted","Data":"872e24b6cedba9cb408f04bf14e0fe63bb921732f112295d5895a6b7b077fee6"} Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.612504 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerStarted","Data":"7c899a40cb4d581cb29edcc1c065da074f3e19182437333ec6d052e29b059c3e"} Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.625988 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c73a1965-ccff-43eb-a317-91ca6e551c4e","Type":"ContainerStarted","Data":"93ac5e0cd1e8c630f70dc2f6819515e19ac5872172b0dada2894d9cafd7db0b8"} Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.627076 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mrfh5" podStartSLOduration=13.627055284 podStartE2EDuration="13.627055284s" podCreationTimestamp="2026-01-28 16:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:49.620741202 +0000 UTC m=+1221.896712733" watchObservedRunningTime="2026-01-28 16:05:49.627055284 +0000 UTC m=+1221.903026795" Jan 28 16:05:49 crc kubenswrapper[4903]: I0128 16:05:49.640738 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8brlz" podStartSLOduration=4.646560235 podStartE2EDuration="30.640719437s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="2026-01-28 16:05:22.146606822 +0000 UTC m=+1194.422578333" lastFinishedPulling="2026-01-28 16:05:48.140766024 +0000 UTC m=+1220.416737535" observedRunningTime="2026-01-28 16:05:49.639253456 +0000 UTC m=+1221.915224967" watchObservedRunningTime="2026-01-28 16:05:49.640719437 +0000 UTC m=+1221.916690948" Jan 28 16:05:50 crc kubenswrapper[4903]: I0128 16:05:50.636205 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s958x" event={"ID":"2ee18582-19e5-4d9a-8fcf-bf69d8efa384","Type":"ContainerStarted","Data":"5e4efe128a7bf150172b57c3c25cab4bc80693cce0b769bf104d5de605e7d6cd"} Jan 28 16:05:50 crc 
kubenswrapper[4903]: I0128 16:05:50.641454 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40392bf6-fb24-41cb-b61a-2b6d768b3f9b","Type":"ContainerStarted","Data":"6302c18d46fac1f965887e2e9661489f11c7f3c94dd5110d755905bdd97cf914"} Jan 28 16:05:50 crc kubenswrapper[4903]: I0128 16:05:50.669348 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c73a1965-ccff-43eb-a317-91ca6e551c4e","Type":"ContainerStarted","Data":"f4190431caf39e1cf62f8df34560e8922fa98469cc57b19abf0293f2b23bc912"} Jan 28 16:05:50 crc kubenswrapper[4903]: I0128 16:05:50.669427 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c73a1965-ccff-43eb-a317-91ca6e551c4e","Type":"ContainerStarted","Data":"6d3f8b7a72c94efad6f7723a41564b3b72e80239ea0797b5173fa8f40d6d1376"} Jan 28 16:05:50 crc kubenswrapper[4903]: I0128 16:05:50.683814 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-s958x" podStartSLOduration=2.336799243 podStartE2EDuration="31.683797367s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="2026-01-28 16:05:20.552658069 +0000 UTC m=+1192.828629580" lastFinishedPulling="2026-01-28 16:05:49.899656193 +0000 UTC m=+1222.175627704" observedRunningTime="2026-01-28 16:05:50.681056442 +0000 UTC m=+1222.957027963" watchObservedRunningTime="2026-01-28 16:05:50.683797367 +0000 UTC m=+1222.959768868" Jan 28 16:05:50 crc kubenswrapper[4903]: I0128 16:05:50.710966 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.710943116 podStartE2EDuration="5.710943116s" podCreationTimestamp="2026-01-28 16:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:50.701736645 +0000 UTC m=+1222.977708166" watchObservedRunningTime="2026-01-28 16:05:50.710943116 +0000 UTC m=+1222.986914617" Jan 28 16:05:50 crc kubenswrapper[4903]: I0128 16:05:50.738880 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.738863347 podStartE2EDuration="13.738863347s" podCreationTimestamp="2026-01-28 16:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:50.729833111 +0000 UTC m=+1223.005804622" watchObservedRunningTime="2026-01-28 16:05:50.738863347 +0000 UTC m=+1223.014834858" Jan 28 16:05:51 crc kubenswrapper[4903]: I0128 16:05:51.681486 4903 generic.go:334] "Generic (PLEG): container finished" podID="d4df0a14-2dcb-43de-8f3d-26b25f189888" containerID="872e24b6cedba9cb408f04bf14e0fe63bb921732f112295d5895a6b7b077fee6" exitCode=0 Jan 28 16:05:51 crc kubenswrapper[4903]: I0128 16:05:51.681643 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8brlz" event={"ID":"d4df0a14-2dcb-43de-8f3d-26b25f189888","Type":"ContainerDied","Data":"872e24b6cedba9cb408f04bf14e0fe63bb921732f112295d5895a6b7b077fee6"} Jan 28 16:05:52 crc kubenswrapper[4903]: I0128 16:05:52.457595 4903 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort 
pod65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod65e29a8c_99f3_4dac_87c2_aab5c6cd0b7c.slice" Jan 28 16:05:52 crc kubenswrapper[4903]: E0128 16:05:52.457886 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod65e29a8c_99f3_4dac_87c2_aab5c6cd0b7c.slice" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" podUID="65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" Jan 28 16:05:52 crc kubenswrapper[4903]: I0128 16:05:52.693429 4903 generic.go:334] "Generic (PLEG): container finished" podID="9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" containerID="ce9969752253223f3d742d8c53554034aef2e9373de610bd14d5da8524527791" exitCode=0 Jan 28 16:05:52 crc kubenswrapper[4903]: I0128 16:05:52.693508 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-xgvd7" Jan 28 16:05:52 crc kubenswrapper[4903]: I0128 16:05:52.694056 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrfh5" event={"ID":"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f","Type":"ContainerDied","Data":"ce9969752253223f3d742d8c53554034aef2e9373de610bd14d5da8524527791"} Jan 28 16:05:52 crc kubenswrapper[4903]: I0128 16:05:52.780873 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xgvd7"] Jan 28 16:05:52 crc kubenswrapper[4903]: I0128 16:05:52.785981 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-xgvd7"] Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.044758 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.085574 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-config-data\") pod \"d4df0a14-2dcb-43de-8f3d-26b25f189888\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.085722 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxsc\" (UniqueName: \"kubernetes.io/projected/d4df0a14-2dcb-43de-8f3d-26b25f189888-kube-api-access-djxsc\") pod \"d4df0a14-2dcb-43de-8f3d-26b25f189888\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.085857 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4df0a14-2dcb-43de-8f3d-26b25f189888-logs\") pod \"d4df0a14-2dcb-43de-8f3d-26b25f189888\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.085884 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-scripts\") pod \"d4df0a14-2dcb-43de-8f3d-26b25f189888\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.086175 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4df0a14-2dcb-43de-8f3d-26b25f189888-logs" (OuterVolumeSpecName: "logs") pod "d4df0a14-2dcb-43de-8f3d-26b25f189888" (UID: "d4df0a14-2dcb-43de-8f3d-26b25f189888"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.086242 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-combined-ca-bundle\") pod \"d4df0a14-2dcb-43de-8f3d-26b25f189888\" (UID: \"d4df0a14-2dcb-43de-8f3d-26b25f189888\") " Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.087188 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4df0a14-2dcb-43de-8f3d-26b25f189888-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.090685 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-scripts" (OuterVolumeSpecName: "scripts") pod "d4df0a14-2dcb-43de-8f3d-26b25f189888" (UID: "d4df0a14-2dcb-43de-8f3d-26b25f189888"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.092658 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4df0a14-2dcb-43de-8f3d-26b25f189888-kube-api-access-djxsc" (OuterVolumeSpecName: "kube-api-access-djxsc") pod "d4df0a14-2dcb-43de-8f3d-26b25f189888" (UID: "d4df0a14-2dcb-43de-8f3d-26b25f189888"). InnerVolumeSpecName "kube-api-access-djxsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.111779 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4df0a14-2dcb-43de-8f3d-26b25f189888" (UID: "d4df0a14-2dcb-43de-8f3d-26b25f189888"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.111865 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-config-data" (OuterVolumeSpecName: "config-data") pod "d4df0a14-2dcb-43de-8f3d-26b25f189888" (UID: "d4df0a14-2dcb-43de-8f3d-26b25f189888"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.188966 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxsc\" (UniqueName: \"kubernetes.io/projected/d4df0a14-2dcb-43de-8f3d-26b25f189888-kube-api-access-djxsc\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.189013 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.189028 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.189039 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4df0a14-2dcb-43de-8f3d-26b25f189888-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.704711 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerStarted","Data":"45ed9503932731b51a04f6cf84c64e972433f15f11663c344b453b1f74835228"} Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.706811 4903 generic.go:334] "Generic (PLEG): container finished" podID="2ee18582-19e5-4d9a-8fcf-bf69d8efa384" containerID="5e4efe128a7bf150172b57c3c25cab4bc80693cce0b769bf104d5de605e7d6cd" exitCode=0 Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.706872 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s958x" event={"ID":"2ee18582-19e5-4d9a-8fcf-bf69d8efa384","Type":"ContainerDied","Data":"5e4efe128a7bf150172b57c3c25cab4bc80693cce0b769bf104d5de605e7d6cd"} Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.709960 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8brlz" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.710664 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8brlz" event={"ID":"d4df0a14-2dcb-43de-8f3d-26b25f189888","Type":"ContainerDied","Data":"e144961d601d2163004a92f02be7b5f2a5335ba9af044ac7ecff68289abc2ea2"} Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.710702 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e144961d601d2163004a92f02be7b5f2a5335ba9af044ac7ecff68289abc2ea2" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.936845 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-868d5455d4-797gw"] Jan 28 16:05:53 crc kubenswrapper[4903]: E0128 16:05:53.941574 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4df0a14-2dcb-43de-8f3d-26b25f189888" containerName="placement-db-sync" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.941599 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4df0a14-2dcb-43de-8f3d-26b25f189888" containerName="placement-db-sync" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.941809 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4df0a14-2dcb-43de-8f3d-26b25f189888" containerName="placement-db-sync" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.942881 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.945820 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.948546 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.948930 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.949079 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bwvvn" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.954059 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 16:05:53 crc kubenswrapper[4903]: I0128 16:05:53.960479 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-868d5455d4-797gw"] Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.003189 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrk5\" (UniqueName: \"kubernetes.io/projected/d91d56c5-1ada-417a-8a87-dc4e3960a186-kube-api-access-dcrk5\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.003254 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-public-tls-certs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.003325 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-config-data\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.003395 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-scripts\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.003414 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-internal-tls-certs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.003440 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d91d56c5-1ada-417a-8a87-dc4e3960a186-logs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.003471 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-combined-ca-bundle\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.092682 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.104872 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm7kq\" (UniqueName: \"kubernetes.io/projected/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-kube-api-access-vm7kq\") pod \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.104967 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-credential-keys\") pod \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105041 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-fernet-keys\") pod \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105068 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-config-data\") pod \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105121 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-combined-ca-bundle\") pod \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105188 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-scripts\") pod \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\" (UID: \"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f\") " Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105375 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d91d56c5-1ada-417a-8a87-dc4e3960a186-logs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105421 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-combined-ca-bundle\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105446 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcrk5\" (UniqueName: \"kubernetes.io/projected/d91d56c5-1ada-417a-8a87-dc4e3960a186-kube-api-access-dcrk5\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105465 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-public-tls-certs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105522 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-config-data\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105604 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-scripts\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105621 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-internal-tls-certs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.105887 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d91d56c5-1ada-417a-8a87-dc4e3960a186-logs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.112929 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" (UID: "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.113478 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-scripts" (OuterVolumeSpecName: "scripts") pod "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" (UID: "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.113725 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-internal-tls-certs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.114927 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-combined-ca-bundle\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.116580 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-kube-api-access-vm7kq" (OuterVolumeSpecName: "kube-api-access-vm7kq") pod "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" (UID: "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f"). InnerVolumeSpecName "kube-api-access-vm7kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.116996 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-scripts\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.119121 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-config-data\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.120431 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" (UID: "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.122422 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-public-tls-certs\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.127134 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrk5\" (UniqueName: \"kubernetes.io/projected/d91d56c5-1ada-417a-8a87-dc4e3960a186-kube-api-access-dcrk5\") pod \"placement-868d5455d4-797gw\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.135990 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-config-data" (OuterVolumeSpecName: "config-data") pod "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" (UID: "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.144241 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" (UID: "9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.207820 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.207895 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.207913 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.207956 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm7kq\" (UniqueName: \"kubernetes.io/projected/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-kube-api-access-vm7kq\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.207970 4903 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.207981 4903 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.283659 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.461057 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c" path="/var/lib/kubelet/pods/65e29a8c-99f3-4dac-87c2-aab5c6cd0b7c/volumes" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.721969 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mrfh5" event={"ID":"9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f","Type":"ContainerDied","Data":"26fd417246c8826d0b1ddbd93f52da0dbf9260d71a22f6be17ddbd679be2cb0e"} Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.722352 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26fd417246c8826d0b1ddbd93f52da0dbf9260d71a22f6be17ddbd679be2cb0e" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.721994 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mrfh5" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.803806 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-868d5455d4-797gw"] Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.882010 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-55866f486f-t9ft2"] Jan 28 16:05:54 crc kubenswrapper[4903]: E0128 16:05:54.882508 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" containerName="keystone-bootstrap" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.882548 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" containerName="keystone-bootstrap" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.882802 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" containerName="keystone-bootstrap" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.883633 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.890907 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.891324 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.891519 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.891675 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x79jw" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.893012 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.898400 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 16:05:54 crc kubenswrapper[4903]: I0128 16:05:54.900728 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55866f486f-t9ft2"] Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.048806 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-config-data\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.048884 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-credential-keys\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.048924 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-fernet-keys\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc 
kubenswrapper[4903]: I0128 16:05:55.048947 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-internal-tls-certs\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.048999 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94jq\" (UniqueName: \"kubernetes.io/projected/1f6d6643-926c-4d0d-8986-a7c56e748e3f-kube-api-access-d94jq\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.049046 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-combined-ca-bundle\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.049067 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-scripts\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.049095 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-public-tls-certs\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.148827 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150238 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-config-data\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150293 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-credential-keys\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150333 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-fernet-keys\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150354 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-internal-tls-certs\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150400 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d94jq\" (UniqueName: \"kubernetes.io/projected/1f6d6643-926c-4d0d-8986-a7c56e748e3f-kube-api-access-d94jq\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150447 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-combined-ca-bundle\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150477 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-scripts\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.150506 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-public-tls-certs\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.169424 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-credential-keys\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.170000 
4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-internal-tls-certs\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.170197 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-fernet-keys\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.171615 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-public-tls-certs\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.173031 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-config-data\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.176071 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-combined-ca-bundle\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.177988 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-scripts\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.206152 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d94jq\" (UniqueName: \"kubernetes.io/projected/1f6d6643-926c-4d0d-8986-a7c56e748e3f-kube-api-access-d94jq\") pod \"keystone-55866f486f-t9ft2\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.252255 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-db-sync-config-data\") pod \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.252329 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-combined-ca-bundle\") pod \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.252353 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8fr2\" (UniqueName: \"kubernetes.io/projected/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-kube-api-access-w8fr2\") pod 
\"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\" (UID: \"2ee18582-19e5-4d9a-8fcf-bf69d8efa384\") " Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.256802 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-kube-api-access-w8fr2" (OuterVolumeSpecName: "kube-api-access-w8fr2") pod "2ee18582-19e5-4d9a-8fcf-bf69d8efa384" (UID: "2ee18582-19e5-4d9a-8fcf-bf69d8efa384"). InnerVolumeSpecName "kube-api-access-w8fr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.266595 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ee18582-19e5-4d9a-8fcf-bf69d8efa384" (UID: "2ee18582-19e5-4d9a-8fcf-bf69d8efa384"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.299968 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ee18582-19e5-4d9a-8fcf-bf69d8efa384" (UID: "2ee18582-19e5-4d9a-8fcf-bf69d8efa384"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.337886 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.354203 4903 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.354237 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.354246 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8fr2\" (UniqueName: \"kubernetes.io/projected/2ee18582-19e5-4d9a-8fcf-bf69d8efa384-kube-api-access-w8fr2\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.738298 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s958x" event={"ID":"2ee18582-19e5-4d9a-8fcf-bf69d8efa384","Type":"ContainerDied","Data":"4d1073d7a8ce68ee97c034628403d71f01e4b7c7fac12aaf651639225a11c572"} Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.738697 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d1073d7a8ce68ee97c034628403d71f01e4b7c7fac12aaf651639225a11c572" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.738512 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-s958x" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.744103 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-868d5455d4-797gw" event={"ID":"d91d56c5-1ada-417a-8a87-dc4e3960a186","Type":"ContainerStarted","Data":"02a42f37dbf91bc71d23efe4fb6af018b9e853e3b220c2f03760e372b14d5184"} Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.744150 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-868d5455d4-797gw" event={"ID":"d91d56c5-1ada-417a-8a87-dc4e3960a186","Type":"ContainerStarted","Data":"5dd7a851cd619c29827b0ea6cd215ddd77b2818c97ba5045d1ae347a56fe5ca2"} Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.744167 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-868d5455d4-797gw" event={"ID":"d91d56c5-1ada-417a-8a87-dc4e3960a186","Type":"ContainerStarted","Data":"515dc11617073c0c30c93ab9c6e7836446b746f7e723fa0f2ccd8ff82d8c8a57"} Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.747926 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.747992 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.790438 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-868d5455d4-797gw" podStartSLOduration=2.79041327 podStartE2EDuration="2.79041327s" podCreationTimestamp="2026-01-28 16:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:55.772740648 +0000 UTC m=+1228.048712159" watchObservedRunningTime="2026-01-28 16:05:55.79041327 +0000 UTC m=+1228.066384781" Jan 28 16:05:55 crc kubenswrapper[4903]: I0128 16:05:55.840196 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-55866f486f-t9ft2"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.016661 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cd9f7788c-9rhk8"] Jan 28 16:05:56 crc kubenswrapper[4903]: E0128 16:05:56.017102 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee18582-19e5-4d9a-8fcf-bf69d8efa384" containerName="barbican-db-sync" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.017126 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee18582-19e5-4d9a-8fcf-bf69d8efa384" containerName="barbican-db-sync" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.017309 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee18582-19e5-4d9a-8fcf-bf69d8efa384" containerName="barbican-db-sync" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.018298 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.024622 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bmnf6" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.024777 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.024869 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.033819 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cd9f7788c-9rhk8"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.074678 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-698d7dfbbb-d88kl"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.077197 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.080720 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.097388 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4sm8\" (UniqueName: \"kubernetes.io/projected/64646a57-b496-4bf3-8b63-d53321316304-kube-api-access-n4sm8\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.110311 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-combined-ca-bundle\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.110465 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.110592 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data-custom\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.110796 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64646a57-b496-4bf3-8b63-d53321316304-logs\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.122103 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-b6h97"] Jan 
28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.127394 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.159504 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-698d7dfbbb-d88kl"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.183627 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-b6h97"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.212786 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-combined-ca-bundle\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.212844 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-svc\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.212919 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.212939 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.212974 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data-custom\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213104 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5gp\" (UniqueName: \"kubernetes.io/projected/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-kube-api-access-7r5gp\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213148 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-logs\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213179 4903 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64646a57-b496-4bf3-8b63-d53321316304-logs\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213256 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213349 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4sm8\" (UniqueName: \"kubernetes.io/projected/64646a57-b496-4bf3-8b63-d53321316304-kube-api-access-n4sm8\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213383 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqsp\" (UniqueName: \"kubernetes.io/projected/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-kube-api-access-qpqsp\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213418 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213434 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-config\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213488 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data-custom\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213647 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.213680 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-combined-ca-bundle\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " 
pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.214700 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64646a57-b496-4bf3-8b63-d53321316304-logs\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.217878 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data-custom\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.218891 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.219933 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-combined-ca-bundle\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.240149 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4sm8\" (UniqueName: \"kubernetes.io/projected/64646a57-b496-4bf3-8b63-d53321316304-kube-api-access-n4sm8\") pod \"barbican-worker-5cd9f7788c-9rhk8\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.294112 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.295613 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.305217 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58774fdb8b-5j5kb"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.307048 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.311138 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315665 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqsp\" (UniqueName: \"kubernetes.io/projected/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-kube-api-access-qpqsp\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315716 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315739 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-config\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315780 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data-custom\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315850 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315891 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-combined-ca-bundle\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315916 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-svc\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.315941 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.316017 4903 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7r5gp\" (UniqueName: \"kubernetes.io/projected/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-kube-api-access-7r5gp\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.316041 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-logs\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.316086 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.318909 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-svc\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.319096 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.319644 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-config\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.319960 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58774fdb8b-5j5kb"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.320216 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.321089 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.335438 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " 
pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.339488 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-logs\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.341404 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpqsp\" (UniqueName: \"kubernetes.io/projected/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-kube-api-access-qpqsp\") pod \"dnsmasq-dns-6554f656b5-b6h97\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.343182 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data-custom\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.343787 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r5gp\" (UniqueName: \"kubernetes.io/projected/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-kube-api-access-7r5gp\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.343773 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.349819 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-combined-ca-bundle\") pod \"barbican-keystone-listener-698d7dfbbb-d88kl\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.359315 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.366830 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.396386 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.417337 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5dr\" (UniqueName: \"kubernetes.io/projected/9f552c7e-3cf3-40e4-8afd-817b1e46302c-kube-api-access-bl5dr\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.417397 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f552c7e-3cf3-40e4-8afd-817b1e46302c-logs\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.417430 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.417504 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data-custom\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.417521 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-combined-ca-bundle\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.447706 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.518902 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data-custom\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.519283 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-combined-ca-bundle\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.519472 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5dr\" (UniqueName: \"kubernetes.io/projected/9f552c7e-3cf3-40e4-8afd-817b1e46302c-kube-api-access-bl5dr\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.519513 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f552c7e-3cf3-40e4-8afd-817b1e46302c-logs\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.520811 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f552c7e-3cf3-40e4-8afd-817b1e46302c-logs\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.521154 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.524566 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-combined-ca-bundle\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.527125 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data-custom\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.542244 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc 
kubenswrapper[4903]: I0128 16:05:56.549185 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5dr\" (UniqueName: \"kubernetes.io/projected/9f552c7e-3cf3-40e4-8afd-817b1e46302c-kube-api-access-bl5dr\") pod \"barbican-api-58774fdb8b-5j5kb\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.640084 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.723171 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cd9f7788c-9rhk8"] Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.758351 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55866f486f-t9ft2" event={"ID":"1f6d6643-926c-4d0d-8986-a7c56e748e3f","Type":"ContainerStarted","Data":"468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4"} Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.758395 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55866f486f-t9ft2" event={"ID":"1f6d6643-926c-4d0d-8986-a7c56e748e3f","Type":"ContainerStarted","Data":"cd71da642a5c21e3b45fcf93be3685bb0d8fe5759453adf3438a5efc81be2db5"} Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.758657 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.775365 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" event={"ID":"64646a57-b496-4bf3-8b63-d53321316304","Type":"ContainerStarted","Data":"cf05763a6a3afc9c6044d15f18f630f4d0ebc978daa2ade57f18a815bc609544"} Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.778285 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.778302 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.780406 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-55866f486f-t9ft2" podStartSLOduration=2.780387852 podStartE2EDuration="2.780387852s" podCreationTimestamp="2026-01-28 16:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:56.778566752 +0000 UTC m=+1229.054538273" watchObservedRunningTime="2026-01-28 16:05:56.780387852 +0000 UTC m=+1229.056359363" Jan 28 16:05:56 crc kubenswrapper[4903]: I0128 16:05:56.805459 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-698d7dfbbb-d88kl"] Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.104405 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-b6h97"] Jan 28 16:05:57 crc kubenswrapper[4903]: W0128 16:05:57.105144 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d5bac5_56df_467e_a02c_9e2e0d86f3ca.slice/crio-54dcf25999c87a9d71f5a2ea67c19474ce8f1fc61fe68efa9e4bfab119dd1ec1 WatchSource:0}: Error finding container 54dcf25999c87a9d71f5a2ea67c19474ce8f1fc61fe68efa9e4bfab119dd1ec1: Status 404 returned 
error can't find the container with id 54dcf25999c87a9d71f5a2ea67c19474ce8f1fc61fe68efa9e4bfab119dd1ec1 Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.215549 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58774fdb8b-5j5kb"] Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.804346 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58774fdb8b-5j5kb" event={"ID":"9f552c7e-3cf3-40e4-8afd-817b1e46302c","Type":"ContainerStarted","Data":"69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a"} Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.804711 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58774fdb8b-5j5kb" event={"ID":"9f552c7e-3cf3-40e4-8afd-817b1e46302c","Type":"ContainerStarted","Data":"aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe"} Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.804724 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58774fdb8b-5j5kb" event={"ID":"9f552c7e-3cf3-40e4-8afd-817b1e46302c","Type":"ContainerStarted","Data":"53309f5fab926ea3ffd86630f179c7485b336a5b5ededdf3108417013f9f862e"} Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.804759 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.804782 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.810247 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" event={"ID":"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9","Type":"ContainerStarted","Data":"76e5a9fe1b05d7b4578120a0f31a2b3fe045b4a8f73ddaffc391b45091ddb9c5"} Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.817126 4903 generic.go:334] "Generic (PLEG): container finished" podID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerID="632e3391107c4240634e273ea5d1f8da2c43dc4bda1903457309b1f160ab2508" exitCode=0 Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.817185 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" event={"ID":"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca","Type":"ContainerDied","Data":"632e3391107c4240634e273ea5d1f8da2c43dc4bda1903457309b1f160ab2508"} Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.817235 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" event={"ID":"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca","Type":"ContainerStarted","Data":"54dcf25999c87a9d71f5a2ea67c19474ce8f1fc61fe68efa9e4bfab119dd1ec1"} Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.824463 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58774fdb8b-5j5kb" podStartSLOduration=1.8244478979999998 podStartE2EDuration="1.824447898s" podCreationTimestamp="2026-01-28 16:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:05:57.822923117 +0000 UTC m=+1230.098894628" watchObservedRunningTime="2026-01-28 16:05:57.824447898 +0000 UTC m=+1230.100419409" Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.889770 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 
28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.890046 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.964311 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:57 crc kubenswrapper[4903]: I0128 16:05:57.966998 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:58 crc kubenswrapper[4903]: I0128 16:05:58.827878 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:58 crc kubenswrapper[4903]: I0128 16:05:58.827933 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.390525 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.390961 4903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.420215 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79d7544958-xm4mt"] Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.421930 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.424949 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.438756 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79d7544958-xm4mt"] Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.439186 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.511937 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.587347 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwbz\" (UniqueName: \"kubernetes.io/projected/438d1db6-7b20-4f31-8a43-aa8f0c972501-kube-api-access-pdwbz\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.587746 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data-custom\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.587808 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-public-tls-certs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: 
I0128 16:05:59.588133 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-combined-ca-bundle\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.588231 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-internal-tls-certs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.588296 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.588344 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438d1db6-7b20-4f31-8a43-aa8f0c972501-logs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.690062 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-combined-ca-bundle\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.690127 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-internal-tls-certs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.690150 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.690190 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438d1db6-7b20-4f31-8a43-aa8f0c972501-logs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.690237 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdwbz\" (UniqueName: \"kubernetes.io/projected/438d1db6-7b20-4f31-8a43-aa8f0c972501-kube-api-access-pdwbz\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc 
kubenswrapper[4903]: I0128 16:05:59.690262 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data-custom\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.690314 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-public-tls-certs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.690915 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438d1db6-7b20-4f31-8a43-aa8f0c972501-logs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.694967 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-combined-ca-bundle\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.695763 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data-custom\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.697247 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-public-tls-certs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.713275 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-internal-tls-certs\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.713815 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdwbz\" (UniqueName: \"kubernetes.io/projected/438d1db6-7b20-4f31-8a43-aa8f0c972501-kube-api-access-pdwbz\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.724754 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data\") pod \"barbican-api-79d7544958-xm4mt\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:05:59 crc kubenswrapper[4903]: I0128 16:05:59.739361 4903 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:06:00 crc kubenswrapper[4903]: I0128 16:06:00.858716 4903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 16:06:00 crc kubenswrapper[4903]: I0128 16:06:00.859047 4903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 16:06:00 crc kubenswrapper[4903]: I0128 16:06:00.950363 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 16:06:01 crc kubenswrapper[4903]: I0128 16:06:01.168730 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 16:06:04 crc kubenswrapper[4903]: I0128 16:06:04.891479 4903 generic.go:334] "Generic (PLEG): container finished" podID="3f168baf-cfa3-4403-825f-ed1a8e92beca" containerID="c7bf1e8f41ac47e5ad10262b8826c4d9516f64bb9a727ac6db342e3fd3db3370" exitCode=0 Jan 28 16:06:04 crc kubenswrapper[4903]: I0128 16:06:04.891590 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f6twx" event={"ID":"3f168baf-cfa3-4403-825f-ed1a8e92beca","Type":"ContainerDied","Data":"c7bf1e8f41ac47e5ad10262b8826c4d9516f64bb9a727ac6db342e3fd3db3370"} Jan 28 16:06:05 crc kubenswrapper[4903]: E0128 16:06:05.824170 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.914365 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" event={"ID":"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9","Type":"ContainerStarted","Data":"6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228"} Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.914601 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79d7544958-xm4mt"] Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.920274 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" event={"ID":"64646a57-b496-4bf3-8b63-d53321316304","Type":"ContainerStarted","Data":"6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82"} Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.927176 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" event={"ID":"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca","Type":"ContainerStarted","Data":"ba9719234409e77c7d6cc555d76f304aa157ad008d6da259306237c307202308"} Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.927439 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.930355 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="ceilometer-notification-agent" containerID="cri-o://7c899a40cb4d581cb29edcc1c065da074f3e19182437333ec6d052e29b059c3e" gracePeriod=30 Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.930593 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerStarted","Data":"d03d2c3b6fa852c536c892e871da78b52ed138599f5ace92b45dcf8bfe382314"} Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.930657 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.930684 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="proxy-httpd" containerID="cri-o://d03d2c3b6fa852c536c892e871da78b52ed138599f5ace92b45dcf8bfe382314" gracePeriod=30 Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.930702 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="sg-core" containerID="cri-o://45ed9503932731b51a04f6cf84c64e972433f15f11663c344b453b1f74835228" gracePeriod=30 Jan 28 16:06:05 crc kubenswrapper[4903]: W0128 16:06:05.934972 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod438d1db6_7b20_4f31_8a43_aa8f0c972501.slice/crio-5468460068ba7936c0546ff6b356daa0181d7982dd39ef19f47094b5b655b9e4 WatchSource:0}: Error finding container 5468460068ba7936c0546ff6b356daa0181d7982dd39ef19f47094b5b655b9e4: Status 404 returned error can't find the container with id 5468460068ba7936c0546ff6b356daa0181d7982dd39ef19f47094b5b655b9e4 Jan 28 16:06:05 crc kubenswrapper[4903]: I0128 16:06:05.961584 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" podStartSLOduration=9.961560581 podStartE2EDuration="9.961560581s" podCreationTimestamp="2026-01-28 16:05:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:05.945810772 +0000 UTC m=+1238.221782283" watchObservedRunningTime="2026-01-28 16:06:05.961560581 +0000 UTC m=+1238.237532112" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.219080 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-f6twx" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.319234 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-combined-ca-bundle\") pod \"3f168baf-cfa3-4403-825f-ed1a8e92beca\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.319482 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pfn8\" (UniqueName: \"kubernetes.io/projected/3f168baf-cfa3-4403-825f-ed1a8e92beca-kube-api-access-9pfn8\") pod \"3f168baf-cfa3-4403-825f-ed1a8e92beca\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.319570 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-config\") pod \"3f168baf-cfa3-4403-825f-ed1a8e92beca\" (UID: \"3f168baf-cfa3-4403-825f-ed1a8e92beca\") " Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.328273 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f168baf-cfa3-4403-825f-ed1a8e92beca-kube-api-access-9pfn8" (OuterVolumeSpecName: "kube-api-access-9pfn8") pod "3f168baf-cfa3-4403-825f-ed1a8e92beca" (UID: "3f168baf-cfa3-4403-825f-ed1a8e92beca"). InnerVolumeSpecName "kube-api-access-9pfn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.357781 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f168baf-cfa3-4403-825f-ed1a8e92beca" (UID: "3f168baf-cfa3-4403-825f-ed1a8e92beca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.357993 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-config" (OuterVolumeSpecName: "config") pod "3f168baf-cfa3-4403-825f-ed1a8e92beca" (UID: "3f168baf-cfa3-4403-825f-ed1a8e92beca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.421162 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pfn8\" (UniqueName: \"kubernetes.io/projected/3f168baf-cfa3-4403-825f-ed1a8e92beca-kube-api-access-9pfn8\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.421197 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.421212 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f168baf-cfa3-4403-825f-ed1a8e92beca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.947216 4903 generic.go:334] "Generic (PLEG): container finished" podID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerID="d03d2c3b6fa852c536c892e871da78b52ed138599f5ace92b45dcf8bfe382314" exitCode=0 Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.947644 4903 generic.go:334] "Generic (PLEG): container finished" podID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerID="45ed9503932731b51a04f6cf84c64e972433f15f11663c344b453b1f74835228" exitCode=2 Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.947713 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerDied","Data":"d03d2c3b6fa852c536c892e871da78b52ed138599f5ace92b45dcf8bfe382314"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.947753 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerDied","Data":"45ed9503932731b51a04f6cf84c64e972433f15f11663c344b453b1f74835228"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.950729 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f6twx" event={"ID":"3f168baf-cfa3-4403-825f-ed1a8e92beca","Type":"ContainerDied","Data":"2f14a8e6570081278a03878a29cb6110720759ffc05ca7173bc560fa7048f1c3"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.950813 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f14a8e6570081278a03878a29cb6110720759ffc05ca7173bc560fa7048f1c3" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.950897 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-f6twx" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.962135 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj6nt" event={"ID":"cee91865-9bfc-44d2-a0e3-87a4b309ad7e","Type":"ContainerStarted","Data":"6d811b9422e35f2b1a84be2e0cb79a920072e49aade0e343dd02d1459cc291c2"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.966695 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" event={"ID":"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9","Type":"ContainerStarted","Data":"c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.968325 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" event={"ID":"64646a57-b496-4bf3-8b63-d53321316304","Type":"ContainerStarted","Data":"deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.971198 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d7544958-xm4mt" event={"ID":"438d1db6-7b20-4f31-8a43-aa8f0c972501","Type":"ContainerStarted","Data":"4ab5c17cdbc07a22bc6e3f55c4de9ca0284d8300cd938b4df77da1ec21f7ea19"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.971247 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d7544958-xm4mt" event={"ID":"438d1db6-7b20-4f31-8a43-aa8f0c972501","Type":"ContainerStarted","Data":"f5c9a79fdf1fdd76ebd49ee1d6512d0b2f33149f5da0dd564a2edc3e7102a0f1"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.971260 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d7544958-xm4mt" event={"ID":"438d1db6-7b20-4f31-8a43-aa8f0c972501","Type":"ContainerStarted","Data":"5468460068ba7936c0546ff6b356daa0181d7982dd39ef19f47094b5b655b9e4"} Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.971304 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.971470 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:06:06 crc kubenswrapper[4903]: I0128 16:06:06.983403 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-gj6nt" podStartSLOduration=3.023188247 podStartE2EDuration="47.98338801s" podCreationTimestamp="2026-01-28 16:05:19 +0000 UTC" firstStartedPulling="2026-01-28 16:05:20.614802122 +0000 UTC m=+1192.890773633" lastFinishedPulling="2026-01-28 16:06:05.575001885 +0000 UTC m=+1237.850973396" observedRunningTime="2026-01-28 16:06:06.978785655 +0000 UTC m=+1239.254757156" watchObservedRunningTime="2026-01-28 16:06:06.98338801 +0000 UTC m=+1239.259359521" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.002961 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" podStartSLOduration=2.34411551 podStartE2EDuration="11.002935893s" podCreationTimestamp="2026-01-28 16:05:56 +0000 UTC" firstStartedPulling="2026-01-28 16:05:56.837143819 +0000 UTC m=+1229.113115330" lastFinishedPulling="2026-01-28 16:06:05.495964192 +0000 UTC m=+1237.771935713" observedRunningTime="2026-01-28 16:06:06.997859874 +0000 UTC m=+1239.273831385" watchObservedRunningTime="2026-01-28 16:06:07.002935893 
+0000 UTC m=+1239.278907404" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.040008 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79d7544958-xm4mt" podStartSLOduration=8.039985032 podStartE2EDuration="8.039985032s" podCreationTimestamp="2026-01-28 16:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:07.029417505 +0000 UTC m=+1239.305389016" watchObservedRunningTime="2026-01-28 16:06:07.039985032 +0000 UTC m=+1239.315956543" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.067728 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" podStartSLOduration=3.316896829 podStartE2EDuration="12.067706888s" podCreationTimestamp="2026-01-28 16:05:55 +0000 UTC" firstStartedPulling="2026-01-28 16:05:56.733579366 +0000 UTC m=+1229.009550877" lastFinishedPulling="2026-01-28 16:06:05.484389425 +0000 UTC m=+1237.760360936" observedRunningTime="2026-01-28 16:06:07.063677608 +0000 UTC m=+1239.339649119" watchObservedRunningTime="2026-01-28 16:06:07.067706888 +0000 UTC m=+1239.343678399" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.180051 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-b6h97"] Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.216639 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-756cdffcb8-s2nn9"] Jan 28 16:06:07 crc kubenswrapper[4903]: E0128 16:06:07.217552 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f168baf-cfa3-4403-825f-ed1a8e92beca" containerName="neutron-db-sync" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.217578 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f168baf-cfa3-4403-825f-ed1a8e92beca" containerName="neutron-db-sync" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.217982 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f168baf-cfa3-4403-825f-ed1a8e92beca" containerName="neutron-db-sync" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.224111 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.226394 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.229141 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.229677 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.229897 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-c2wqr" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.252735 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2grm\" (UniqueName: \"kubernetes.io/projected/41c983f0-cfa7-48aa-9021-e570c07c4c43-kube-api-access-k2grm\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.252805 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-combined-ca-bundle\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.252848 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-config\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.252918 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-httpd-config\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.252961 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-ovndb-tls-certs\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.274801 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-756cdffcb8-s2nn9"] Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.284157 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-8jk6g"] Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.286306 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.319275 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-8jk6g"] Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.357972 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358037 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-httpd-config\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358073 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358109 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-ovndb-tls-certs\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358137 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358157 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqxbw\" (UniqueName: \"kubernetes.io/projected/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-kube-api-access-kqxbw\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358177 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358244 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2grm\" (UniqueName: \"kubernetes.io/projected/41c983f0-cfa7-48aa-9021-e570c07c4c43-kube-api-access-k2grm\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358267 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-combined-ca-bundle\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358288 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-config\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.358313 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-config\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.365748 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-combined-ca-bundle\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.366036 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-httpd-config\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.366301 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-ovndb-tls-certs\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.370549 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-config\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.380639 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2grm\" (UniqueName: \"kubernetes.io/projected/41c983f0-cfa7-48aa-9021-e570c07c4c43-kube-api-access-k2grm\") pod \"neutron-756cdffcb8-s2nn9\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.459230 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.460008 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-svc\") pod 
\"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.460343 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.460726 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.460765 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqxbw\" (UniqueName: \"kubernetes.io/projected/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-kube-api-access-kqxbw\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.460824 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.461050 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-config\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.461905 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.462130 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.462640 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.463281 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-config\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " 
pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.484743 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqxbw\" (UniqueName: \"kubernetes.io/projected/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-kube-api-access-kqxbw\") pod \"dnsmasq-dns-7bdf86f46f-8jk6g\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.567721 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:07 crc kubenswrapper[4903]: I0128 16:06:07.614083 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:08 crc kubenswrapper[4903]: I0128 16:06:08.011040 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" podUID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerName="dnsmasq-dns" containerID="cri-o://ba9719234409e77c7d6cc555d76f304aa157ad008d6da259306237c307202308" gracePeriod=10 Jan 28 16:06:08 crc kubenswrapper[4903]: I0128 16:06:08.239197 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-756cdffcb8-s2nn9"] Jan 28 16:06:08 crc kubenswrapper[4903]: I0128 16:06:08.262708 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-8jk6g"] Jan 28 16:06:08 crc kubenswrapper[4903]: I0128 16:06:08.333924 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.014897 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" event={"ID":"691f7d2f-fc86-4b14-b6c9-2799a4b384e2","Type":"ContainerStarted","Data":"105a3714f92b036a62de03d1ddcfda814e91804ff990dd3cfc9483b871638523"} Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.018069 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756cdffcb8-s2nn9" event={"ID":"41c983f0-cfa7-48aa-9021-e570c07c4c43","Type":"ContainerStarted","Data":"c83e611f2a7f55d2ba25169d661f96f0a06d3fccac6aec0d905f91d893f0275e"} Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.019960 4903 generic.go:334] "Generic (PLEG): container finished" podID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerID="ba9719234409e77c7d6cc555d76f304aa157ad008d6da259306237c307202308" exitCode=0 Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.019994 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" event={"ID":"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca","Type":"ContainerDied","Data":"ba9719234409e77c7d6cc555d76f304aa157ad008d6da259306237c307202308"} Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.023203 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.026831 4903 generic.go:334] "Generic (PLEG): container finished" podID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerID="7c899a40cb4d581cb29edcc1c065da074f3e19182437333ec6d052e29b059c3e" exitCode=0 Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.026942 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerDied","Data":"7c899a40cb4d581cb29edcc1c065da074f3e19182437333ec6d052e29b059c3e"} Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.181305 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.291131 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-log-httpd\") pod \"ec81a835-dc41-4420-87e9-8eb5efe75894\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.291208 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-config-data\") pod \"ec81a835-dc41-4420-87e9-8eb5efe75894\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.291269 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-sg-core-conf-yaml\") pod \"ec81a835-dc41-4420-87e9-8eb5efe75894\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.291345 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-run-httpd\") pod \"ec81a835-dc41-4420-87e9-8eb5efe75894\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.291379 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qg2w\" (UniqueName: \"kubernetes.io/projected/ec81a835-dc41-4420-87e9-8eb5efe75894-kube-api-access-2qg2w\") pod \"ec81a835-dc41-4420-87e9-8eb5efe75894\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.291409 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-scripts\") pod \"ec81a835-dc41-4420-87e9-8eb5efe75894\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.291441 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-combined-ca-bundle\") pod \"ec81a835-dc41-4420-87e9-8eb5efe75894\" (UID: \"ec81a835-dc41-4420-87e9-8eb5efe75894\") " Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.294846 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec81a835-dc41-4420-87e9-8eb5efe75894" (UID: "ec81a835-dc41-4420-87e9-8eb5efe75894"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.294975 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec81a835-dc41-4420-87e9-8eb5efe75894" (UID: "ec81a835-dc41-4420-87e9-8eb5efe75894"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.318820 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-scripts" (OuterVolumeSpecName: "scripts") pod "ec81a835-dc41-4420-87e9-8eb5efe75894" (UID: "ec81a835-dc41-4420-87e9-8eb5efe75894"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.327346 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec81a835-dc41-4420-87e9-8eb5efe75894-kube-api-access-2qg2w" (OuterVolumeSpecName: "kube-api-access-2qg2w") pod "ec81a835-dc41-4420-87e9-8eb5efe75894" (UID: "ec81a835-dc41-4420-87e9-8eb5efe75894"). InnerVolumeSpecName "kube-api-access-2qg2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.333696 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec81a835-dc41-4420-87e9-8eb5efe75894" (UID: "ec81a835-dc41-4420-87e9-8eb5efe75894"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.362746 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec81a835-dc41-4420-87e9-8eb5efe75894" (UID: "ec81a835-dc41-4420-87e9-8eb5efe75894"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.378359 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-config-data" (OuterVolumeSpecName: "config-data") pod "ec81a835-dc41-4420-87e9-8eb5efe75894" (UID: "ec81a835-dc41-4420-87e9-8eb5efe75894"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.393676 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.393762 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.393830 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.393841 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qg2w\" (UniqueName: \"kubernetes.io/projected/ec81a835-dc41-4420-87e9-8eb5efe75894-kube-api-access-2qg2w\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.393850 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.393857 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec81a835-dc41-4420-87e9-8eb5efe75894-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:09 crc kubenswrapper[4903]: I0128 16:06:09.393864 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec81a835-dc41-4420-87e9-8eb5efe75894-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.036955 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" event={"ID":"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca","Type":"ContainerDied","Data":"54dcf25999c87a9d71f5a2ea67c19474ce8f1fc61fe68efa9e4bfab119dd1ec1"} Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.037499 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54dcf25999c87a9d71f5a2ea67c19474ce8f1fc61fe68efa9e4bfab119dd1ec1" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.038778 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec81a835-dc41-4420-87e9-8eb5efe75894","Type":"ContainerDied","Data":"b50706edb6a7c4f4029f07a45f1ebe165f427fd03b51ab028c9c63ef3d18faa6"} Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.038823 4903 scope.go:117] "RemoveContainer" containerID="d03d2c3b6fa852c536c892e871da78b52ed138599f5ace92b45dcf8bfe382314" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.039012 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.040734 4903 generic.go:334] "Generic (PLEG): container finished" podID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerID="e457863635d09b126b806ff8bb8af8825bbebcfbe9ef6d06ad4336fbc3bd8a67" exitCode=0 Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.040801 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" event={"ID":"691f7d2f-fc86-4b14-b6c9-2799a4b384e2","Type":"ContainerDied","Data":"e457863635d09b126b806ff8bb8af8825bbebcfbe9ef6d06ad4336fbc3bd8a67"} Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.050090 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756cdffcb8-s2nn9" event={"ID":"41c983f0-cfa7-48aa-9021-e570c07c4c43","Type":"ContainerStarted","Data":"01040a11f788e4571e2ef7dad1033cf47b4b204a8fef5289b42053b81549198c"} Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.127773 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.180260 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.206191 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.208070 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-nb\") pod \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.208175 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-swift-storage-0\") pod \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.208478 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpqsp\" (UniqueName: \"kubernetes.io/projected/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-kube-api-access-qpqsp\") pod \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.209113 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-svc\") pod \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.209242 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-config\") pod \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.209275 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-sb\") pod \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\" (UID: \"e2d5bac5-56df-467e-a02c-9e2e0d86f3ca\") " Jan 28 16:06:10 
crc kubenswrapper[4903]: I0128 16:06:10.218042 4903 scope.go:117] "RemoveContainer" containerID="45ed9503932731b51a04f6cf84c64e972433f15f11663c344b453b1f74835228" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.221962 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-kube-api-access-qpqsp" (OuterVolumeSpecName: "kube-api-access-qpqsp") pod "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" (UID: "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca"). InnerVolumeSpecName "kube-api-access-qpqsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.242993 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:10 crc kubenswrapper[4903]: E0128 16:06:10.243451 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerName="dnsmasq-dns" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243463 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerName="dnsmasq-dns" Jan 28 16:06:10 crc kubenswrapper[4903]: E0128 16:06:10.243473 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="sg-core" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243479 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="sg-core" Jan 28 16:06:10 crc kubenswrapper[4903]: E0128 16:06:10.243503 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="ceilometer-notification-agent" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243512 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="ceilometer-notification-agent" Jan 28 16:06:10 crc kubenswrapper[4903]: E0128 16:06:10.243608 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="proxy-httpd" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243618 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="proxy-httpd" Jan 28 16:06:10 crc kubenswrapper[4903]: E0128 16:06:10.243637 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerName="init" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243643 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerName="init" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243834 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="ceilometer-notification-agent" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243848 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" containerName="dnsmasq-dns" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243860 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="proxy-httpd" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.243874 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" containerName="sg-core" Jan 28 
16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.245501 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.251599 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.252155 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.260433 4903 scope.go:117] "RemoveContainer" containerID="7c899a40cb4d581cb29edcc1c065da074f3e19182437333ec6d052e29b059c3e" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.283557 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.284240 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" (UID: "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.301374 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" (UID: "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.302213 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-config" (OuterVolumeSpecName: "config") pod "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" (UID: "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.311281 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" (UID: "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.323705 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.323748 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.323764 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.323777 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.323789 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpqsp\" (UniqueName: \"kubernetes.io/projected/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-kube-api-access-qpqsp\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.337408 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" (UID: "e2d5bac5-56df-467e-a02c-9e2e0d86f3ca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.353365 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-df7b7b7fc-j8ps6"] Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.355631 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.365202 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.365812 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.367381 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df7b7b7fc-j8ps6"] Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.428276 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.428593 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.428728 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-log-httpd\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.428842 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpppl\" (UniqueName: \"kubernetes.io/projected/cae03240-8c2e-463e-a674-10c21514d9cd-kube-api-access-bpppl\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.428974 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-config-data\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.429077 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-scripts\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.429747 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-run-httpd\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.429853 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.428391 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ec81a835-dc41-4420-87e9-8eb5efe75894" path="/var/lib/kubelet/pods/ec81a835-dc41-4420-87e9-8eb5efe75894/volumes" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.531205 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.531270 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.531307 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-public-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.531384 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-log-httpd\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.531453 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-internal-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.531485 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-httpd-config\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.531521 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpppl\" (UniqueName: \"kubernetes.io/projected/cae03240-8c2e-463e-a674-10c21514d9cd-kube-api-access-bpppl\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.532604 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-config-data\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.532640 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-log-httpd\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.532726 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-combined-ca-bundle\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.532799 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-config\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.532877 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k549f\" (UniqueName: \"kubernetes.io/projected/777a1f56-3b78-4161-b388-22d924bf442c-kube-api-access-k549f\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.533290 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-scripts\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.533517 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-run-httpd\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.538337 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.538910 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.538941 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-scripts\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.539885 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-ovndb-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.540059 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-run-httpd\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " 
pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.540273 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-config-data\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.560103 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpppl\" (UniqueName: \"kubernetes.io/projected/cae03240-8c2e-463e-a674-10c21514d9cd-kube-api-access-bpppl\") pod \"ceilometer-0\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.579357 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.641668 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-ovndb-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.641748 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-public-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.641812 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-internal-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.641842 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-httpd-config\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.641892 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-combined-ca-bundle\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.641930 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-config\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.641961 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k549f\" (UniqueName: \"kubernetes.io/projected/777a1f56-3b78-4161-b388-22d924bf442c-kube-api-access-k549f\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 
28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.650468 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-ovndb-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.650830 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-httpd-config\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.652432 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-config\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.652060 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-combined-ca-bundle\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.666373 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-internal-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.666986 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-public-tls-certs\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.674457 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k549f\" (UniqueName: \"kubernetes.io/projected/777a1f56-3b78-4161-b388-22d924bf442c-kube-api-access-k549f\") pod \"neutron-df7b7b7fc-j8ps6\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:10 crc kubenswrapper[4903]: I0128 16:06:10.693476 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.061293 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.065408 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" event={"ID":"691f7d2f-fc86-4b14-b6c9-2799a4b384e2","Type":"ContainerStarted","Data":"3ab473d6bf7c4e850de97bffd626c7cfcf68edd83030794c39f1d2dbacd3494f"} Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.066609 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.072741 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756cdffcb8-s2nn9" event={"ID":"41c983f0-cfa7-48aa-9021-e570c07c4c43","Type":"ContainerStarted","Data":"d6878bfb9cae3d3ccde58fc06ed4eea3ec4003552e092da20838ae81544d9587"} Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.072950 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.076133 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-b6h97" Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.099350 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" podStartSLOduration=4.099326268 podStartE2EDuration="4.099326268s" podCreationTimestamp="2026-01-28 16:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:11.088323538 +0000 UTC m=+1243.364295049" watchObservedRunningTime="2026-01-28 16:06:11.099326268 +0000 UTC m=+1243.375297769" Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.108210 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-756cdffcb8-s2nn9" podStartSLOduration=4.108191159 podStartE2EDuration="4.108191159s" podCreationTimestamp="2026-01-28 16:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:11.104790366 +0000 UTC m=+1243.380761887" watchObservedRunningTime="2026-01-28 16:06:11.108191159 +0000 UTC m=+1243.384162670" Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.130704 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-b6h97"] Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.142364 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-b6h97"] Jan 28 16:06:11 crc kubenswrapper[4903]: I0128 16:06:11.250026 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-df7b7b7fc-j8ps6"] Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.089664 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df7b7b7fc-j8ps6" event={"ID":"777a1f56-3b78-4161-b388-22d924bf442c","Type":"ContainerStarted","Data":"7144e9f3e379f3b1c48972a79f95a4ca58fc84bde1c3b98a44aa1c439247a433"} Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.090355 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.090369 4903 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df7b7b7fc-j8ps6" event={"ID":"777a1f56-3b78-4161-b388-22d924bf442c","Type":"ContainerStarted","Data":"57f5aead75f7ccb66670a88b340768f4042e67c223d457f4586543c309862540"} Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.090381 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df7b7b7fc-j8ps6" event={"ID":"777a1f56-3b78-4161-b388-22d924bf442c","Type":"ContainerStarted","Data":"58ab758700768ed2a02ccd2d856851248ce75d0485b59989c49a652a32abcc68"} Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.091246 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerStarted","Data":"88f6a681447d9ba825540a06af50abf1bd04892418ed932aad1a8245843100e9"} Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.091297 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerStarted","Data":"985e89a7f4b3c100b028adff26e257288c6f56563544d67de1b5f7af3e79d7ed"} Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.121892 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-df7b7b7fc-j8ps6" podStartSLOduration=2.121867487 podStartE2EDuration="2.121867487s" podCreationTimestamp="2026-01-28 16:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:12.110128676 +0000 UTC m=+1244.386100207" watchObservedRunningTime="2026-01-28 16:06:12.121867487 +0000 UTC m=+1244.397838998" Jan 28 16:06:12 crc kubenswrapper[4903]: I0128 16:06:12.423192 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d5bac5-56df-467e-a02c-9e2e0d86f3ca" path="/var/lib/kubelet/pods/e2d5bac5-56df-467e-a02c-9e2e0d86f3ca/volumes" Jan 28 16:06:13 crc kubenswrapper[4903]: I0128 16:06:13.104276 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerStarted","Data":"0af711fbec9350590df98e28a67d803dda37f2df8c8baafe45cedb6b953dc632"} Jan 28 16:06:14 crc kubenswrapper[4903]: I0128 16:06:14.114901 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerStarted","Data":"a687bd805af1065d8802f7bf7931b2436e411acdef4e7541da621edeb435e77e"} Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.136258 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerStarted","Data":"c2f0248a0c24bd29c5e5b3a6141300bb0fdcbe281e614df5506ce6bd943af92d"} Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.136896 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.166712 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.132563958 podStartE2EDuration="6.166696696s" podCreationTimestamp="2026-01-28 16:06:10 +0000 UTC" firstStartedPulling="2026-01-28 16:06:11.066708118 +0000 UTC m=+1243.342679629" lastFinishedPulling="2026-01-28 16:06:15.100840856 +0000 UTC m=+1247.376812367" observedRunningTime="2026-01-28 16:06:16.163144929 +0000 UTC m=+1248.439116470" 
watchObservedRunningTime="2026-01-28 16:06:16.166696696 +0000 UTC m=+1248.442668207" Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.288355 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.359015 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.446703 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58774fdb8b-5j5kb"] Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.447003 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58774fdb8b-5j5kb" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api-log" containerID="cri-o://aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe" gracePeriod=30 Jan 28 16:06:16 crc kubenswrapper[4903]: I0128 16:06:16.447137 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58774fdb8b-5j5kb" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api" containerID="cri-o://69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a" gracePeriod=30 Jan 28 16:06:17 crc kubenswrapper[4903]: I0128 16:06:17.147231 4903 generic.go:334] "Generic (PLEG): container finished" podID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerID="aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe" exitCode=143 Jan 28 16:06:17 crc kubenswrapper[4903]: I0128 16:06:17.147316 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58774fdb8b-5j5kb" event={"ID":"9f552c7e-3cf3-40e4-8afd-817b1e46302c","Type":"ContainerDied","Data":"aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe"} Jan 28 16:06:17 crc kubenswrapper[4903]: I0128 16:06:17.615719 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:17 crc kubenswrapper[4903]: I0128 16:06:17.689309 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-57gmd"] Jan 28 16:06:17 crc kubenswrapper[4903]: I0128 16:06:17.689563 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" podUID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerName="dnsmasq-dns" containerID="cri-o://630d1568fb7af1b219114384dc4e2056041faa5abd0a851fa1ecc695972d5996" gracePeriod=10 Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.158405 4903 generic.go:334] "Generic (PLEG): container finished" podID="cee91865-9bfc-44d2-a0e3-87a4b309ad7e" containerID="6d811b9422e35f2b1a84be2e0cb79a920072e49aade0e343dd02d1459cc291c2" exitCode=0 Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.158517 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj6nt" event={"ID":"cee91865-9bfc-44d2-a0e3-87a4b309ad7e","Type":"ContainerDied","Data":"6d811b9422e35f2b1a84be2e0cb79a920072e49aade0e343dd02d1459cc291c2"} Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.160686 4903 generic.go:334] "Generic (PLEG): container finished" podID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerID="630d1568fb7af1b219114384dc4e2056041faa5abd0a851fa1ecc695972d5996" exitCode=0 Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.160724 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" event={"ID":"9ec50878-cd94-43f7-a0ee-750e2f0ffc95","Type":"ContainerDied","Data":"630d1568fb7af1b219114384dc4e2056041faa5abd0a851fa1ecc695972d5996"} Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.160747 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" event={"ID":"9ec50878-cd94-43f7-a0ee-750e2f0ffc95","Type":"ContainerDied","Data":"58fa28499b83464ebdad7f18d709f1b6b4ff7f87746828d807f56b70589eeba6"} Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.160759 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58fa28499b83464ebdad7f18d709f1b6b4ff7f87746828d807f56b70589eeba6" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.198830 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.334142 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-nb\") pod \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.334199 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dn8b\" (UniqueName: \"kubernetes.io/projected/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-kube-api-access-4dn8b\") pod \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.334329 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-sb\") pod \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.334361 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-svc\") pod \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.334386 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-swift-storage-0\") pod \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.334987 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-config\") pod \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\" (UID: \"9ec50878-cd94-43f7-a0ee-750e2f0ffc95\") " Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.339662 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-kube-api-access-4dn8b" (OuterVolumeSpecName: "kube-api-access-4dn8b") pod "9ec50878-cd94-43f7-a0ee-750e2f0ffc95" (UID: "9ec50878-cd94-43f7-a0ee-750e2f0ffc95"). InnerVolumeSpecName "kube-api-access-4dn8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.382079 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ec50878-cd94-43f7-a0ee-750e2f0ffc95" (UID: "9ec50878-cd94-43f7-a0ee-750e2f0ffc95"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.384279 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ec50878-cd94-43f7-a0ee-750e2f0ffc95" (UID: "9ec50878-cd94-43f7-a0ee-750e2f0ffc95"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.384393 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-config" (OuterVolumeSpecName: "config") pod "9ec50878-cd94-43f7-a0ee-750e2f0ffc95" (UID: "9ec50878-cd94-43f7-a0ee-750e2f0ffc95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.396234 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ec50878-cd94-43f7-a0ee-750e2f0ffc95" (UID: "9ec50878-cd94-43f7-a0ee-750e2f0ffc95"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.399988 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ec50878-cd94-43f7-a0ee-750e2f0ffc95" (UID: "9ec50878-cd94-43f7-a0ee-750e2f0ffc95"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.439391 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.439439 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dn8b\" (UniqueName: \"kubernetes.io/projected/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-kube-api-access-4dn8b\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.439463 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.439477 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.439491 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:18 crc kubenswrapper[4903]: I0128 16:06:18.439506 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ec50878-cd94-43f7-a0ee-750e2f0ffc95-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.168255 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-57gmd" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.204612 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-57gmd"] Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.225691 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-57gmd"] Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.547003 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.621313 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58774fdb8b-5j5kb" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:57452->10.217.0.156:9311: read: connection reset by peer" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.621314 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58774fdb8b-5j5kb" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:57436->10.217.0.156:9311: read: connection reset by peer" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.665069 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-scripts\") pod \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.665140 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-db-sync-config-data\") pod \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.665249 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-config-data\") pod \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.665357 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4vrs\" (UniqueName: \"kubernetes.io/projected/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-kube-api-access-p4vrs\") pod \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.665377 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-combined-ca-bundle\") pod \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.665418 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-etc-machine-id\") pod \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\" (UID: \"cee91865-9bfc-44d2-a0e3-87a4b309ad7e\") " Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.665776 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cee91865-9bfc-44d2-a0e3-87a4b309ad7e" (UID: "cee91865-9bfc-44d2-a0e3-87a4b309ad7e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.670784 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-kube-api-access-p4vrs" (OuterVolumeSpecName: "kube-api-access-p4vrs") pod "cee91865-9bfc-44d2-a0e3-87a4b309ad7e" (UID: "cee91865-9bfc-44d2-a0e3-87a4b309ad7e"). InnerVolumeSpecName "kube-api-access-p4vrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.671705 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cee91865-9bfc-44d2-a0e3-87a4b309ad7e" (UID: "cee91865-9bfc-44d2-a0e3-87a4b309ad7e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.672952 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-scripts" (OuterVolumeSpecName: "scripts") pod "cee91865-9bfc-44d2-a0e3-87a4b309ad7e" (UID: "cee91865-9bfc-44d2-a0e3-87a4b309ad7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.701652 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cee91865-9bfc-44d2-a0e3-87a4b309ad7e" (UID: "cee91865-9bfc-44d2-a0e3-87a4b309ad7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.736298 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-config-data" (OuterVolumeSpecName: "config-data") pod "cee91865-9bfc-44d2-a0e3-87a4b309ad7e" (UID: "cee91865-9bfc-44d2-a0e3-87a4b309ad7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.767227 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4vrs\" (UniqueName: \"kubernetes.io/projected/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-kube-api-access-p4vrs\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.767628 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.767638 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.767650 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.767658 4903 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:19 crc kubenswrapper[4903]: I0128 16:06:19.767667 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee91865-9bfc-44d2-a0e3-87a4b309ad7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.052973 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.172679 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data\") pod \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.172743 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-combined-ca-bundle\") pod \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.172787 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl5dr\" (UniqueName: \"kubernetes.io/projected/9f552c7e-3cf3-40e4-8afd-817b1e46302c-kube-api-access-bl5dr\") pod \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.172923 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data-custom\") pod \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.172996 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f552c7e-3cf3-40e4-8afd-817b1e46302c-logs\") pod \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\" (UID: \"9f552c7e-3cf3-40e4-8afd-817b1e46302c\") " Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.173779 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f552c7e-3cf3-40e4-8afd-817b1e46302c-logs" (OuterVolumeSpecName: "logs") pod "9f552c7e-3cf3-40e4-8afd-817b1e46302c" (UID: "9f552c7e-3cf3-40e4-8afd-817b1e46302c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.177750 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f552c7e-3cf3-40e4-8afd-817b1e46302c" (UID: "9f552c7e-3cf3-40e4-8afd-817b1e46302c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.179070 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f552c7e-3cf3-40e4-8afd-817b1e46302c-kube-api-access-bl5dr" (OuterVolumeSpecName: "kube-api-access-bl5dr") pod "9f552c7e-3cf3-40e4-8afd-817b1e46302c" (UID: "9f552c7e-3cf3-40e4-8afd-817b1e46302c"). InnerVolumeSpecName "kube-api-access-bl5dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.180856 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gj6nt" event={"ID":"cee91865-9bfc-44d2-a0e3-87a4b309ad7e","Type":"ContainerDied","Data":"378a6f159e3321f5ae06130476c089aeba60033f97fe01c5aa59b5037a288ea1"} Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.180898 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="378a6f159e3321f5ae06130476c089aeba60033f97fe01c5aa59b5037a288ea1" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.180961 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gj6nt" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.188120 4903 generic.go:334] "Generic (PLEG): container finished" podID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerID="69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a" exitCode=0 Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.188168 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58774fdb8b-5j5kb" event={"ID":"9f552c7e-3cf3-40e4-8afd-817b1e46302c","Type":"ContainerDied","Data":"69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a"} Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.188207 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58774fdb8b-5j5kb" event={"ID":"9f552c7e-3cf3-40e4-8afd-817b1e46302c","Type":"ContainerDied","Data":"53309f5fab926ea3ffd86630f179c7485b336a5b5ededdf3108417013f9f862e"} Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.188233 4903 scope.go:117] "RemoveContainer" containerID="69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.188363 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58774fdb8b-5j5kb" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.205771 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f552c7e-3cf3-40e4-8afd-817b1e46302c" (UID: "9f552c7e-3cf3-40e4-8afd-817b1e46302c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.223120 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data" (OuterVolumeSpecName: "config-data") pod "9f552c7e-3cf3-40e4-8afd-817b1e46302c" (UID: "9f552c7e-3cf3-40e4-8afd-817b1e46302c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.276702 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.276759 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f552c7e-3cf3-40e4-8afd-817b1e46302c-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.276772 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.276803 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f552c7e-3cf3-40e4-8afd-817b1e46302c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.276814 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl5dr\" (UniqueName: \"kubernetes.io/projected/9f552c7e-3cf3-40e4-8afd-817b1e46302c-kube-api-access-bl5dr\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.281628 4903 scope.go:117] "RemoveContainer" containerID="aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.298051 4903 scope.go:117] "RemoveContainer" containerID="69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a" Jan 28 16:06:20 crc kubenswrapper[4903]: E0128 16:06:20.298576 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a\": container with ID starting with 69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a not found: ID does not exist" containerID="69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.298629 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a"} err="failed to get container status \"69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a\": rpc error: code = NotFound desc = could not find container \"69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a\": container with ID starting with 69e04b58796955694690accd00f3dbae17e5a947074e219e91c1cb8a1b3cb87a not found: ID does not exist" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.298689 4903 scope.go:117] "RemoveContainer" containerID="aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe" Jan 28 16:06:20 crc kubenswrapper[4903]: E0128 16:06:20.298975 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe\": container with ID starting with aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe not found: ID does not exist" containerID="aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.299011 4903 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe"} err="failed to get container status \"aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe\": rpc error: code = NotFound desc = could not find container \"aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe\": container with ID starting with aacc6fe7542ab5fac788bec676901e910f8c4c580cfd5a7a05ac6826cbf5d3fe not found: ID does not exist" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.438331 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" path="/var/lib/kubelet/pods/9ec50878-cd94-43f7-a0ee-750e2f0ffc95/volumes" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.527660 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58774fdb8b-5j5kb"] Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.536064 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58774fdb8b-5j5kb"] Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544120 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-b7g78"] Jan 28 16:06:20 crc kubenswrapper[4903]: E0128 16:06:20.544491 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee91865-9bfc-44d2-a0e3-87a4b309ad7e" containerName="cinder-db-sync" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544507 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee91865-9bfc-44d2-a0e3-87a4b309ad7e" containerName="cinder-db-sync" Jan 28 16:06:20 crc kubenswrapper[4903]: E0128 16:06:20.544545 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544552 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api" Jan 28 16:06:20 crc kubenswrapper[4903]: E0128 16:06:20.544569 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerName="dnsmasq-dns" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544577 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerName="dnsmasq-dns" Jan 28 16:06:20 crc kubenswrapper[4903]: E0128 16:06:20.544592 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerName="init" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544597 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerName="init" Jan 28 16:06:20 crc kubenswrapper[4903]: E0128 16:06:20.544611 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api-log" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544616 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api-log" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544781 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api-log" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544792 4903 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9ec50878-cd94-43f7-a0ee-750e2f0ffc95" containerName="dnsmasq-dns" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544818 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" containerName="barbican-api" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.544834 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee91865-9bfc-44d2-a0e3-87a4b309ad7e" containerName="cinder-db-sync" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.545825 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.556771 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-b7g78"] Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.590875 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.592276 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.597371 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.597999 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kzwm2" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.598227 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.598326 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.608638 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683127 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmvn\" (UniqueName: \"kubernetes.io/projected/d85baae7-8974-44a7-801e-603564209257-kube-api-access-cgmvn\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683191 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683217 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683280 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-scripts\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " 
pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683321 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d85baae7-8974-44a7-801e-603564209257-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683348 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683389 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683422 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-config\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683444 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683474 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683494 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.683514 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vn2\" (UniqueName: \"kubernetes.io/projected/5885ed8d-0267-41a4-9c88-e9be0091674c-kube-api-access-c6vn2\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.740230 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.741825 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.748748 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.761354 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787211 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787278 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787383 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-scripts\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787439 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d85baae7-8974-44a7-801e-603564209257-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787481 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787576 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787629 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-config\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787664 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787701 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787731 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787757 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vn2\" (UniqueName: \"kubernetes.io/projected/5885ed8d-0267-41a4-9c88-e9be0091674c-kube-api-access-c6vn2\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.787918 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmvn\" (UniqueName: \"kubernetes.io/projected/d85baae7-8974-44a7-801e-603564209257-kube-api-access-cgmvn\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.789396 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.796894 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d85baae7-8974-44a7-801e-603564209257-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.798457 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.802169 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.802328 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-config\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.803417 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.804143 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.807713 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-scripts\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.808264 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.817195 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.834382 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmvn\" (UniqueName: \"kubernetes.io/projected/d85baae7-8974-44a7-801e-603564209257-kube-api-access-cgmvn\") pod \"cinder-scheduler-0\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.841743 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vn2\" (UniqueName: \"kubernetes.io/projected/5885ed8d-0267-41a4-9c88-e9be0091674c-kube-api-access-c6vn2\") pod \"dnsmasq-dns-75bfc9b94f-b7g78\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.872116 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.889013 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace134af-67ed-4436-9a00-cd4f22afaf4d-logs\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.889283 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-scripts\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.889377 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.889453 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.889567 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cfvx\" (UniqueName: \"kubernetes.io/projected/ace134af-67ed-4436-9a00-cd4f22afaf4d-kube-api-access-6cfvx\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.889688 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.889774 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace134af-67ed-4436-9a00-cd4f22afaf4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.918066 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.992259 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-scripts\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.992323 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.992351 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.992405 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cfvx\" (UniqueName: \"kubernetes.io/projected/ace134af-67ed-4436-9a00-cd4f22afaf4d-kube-api-access-6cfvx\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.992456 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.992492 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace134af-67ed-4436-9a00-cd4f22afaf4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.992580 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace134af-67ed-4436-9a00-cd4f22afaf4d-logs\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.993416 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace134af-67ed-4436-9a00-cd4f22afaf4d-logs\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:20 crc kubenswrapper[4903]: I0128 16:06:20.996451 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace134af-67ed-4436-9a00-cd4f22afaf4d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.004260 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " 
pod="openstack/cinder-api-0" Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.004742 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data-custom\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.005558 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.017068 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-scripts\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.023320 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cfvx\" (UniqueName: \"kubernetes.io/projected/ace134af-67ed-4436-9a00-cd4f22afaf4d-kube-api-access-6cfvx\") pod \"cinder-api-0\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " pod="openstack/cinder-api-0" Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.067275 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.458440 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-b7g78"] Jan 28 16:06:21 crc kubenswrapper[4903]: W0128 16:06:21.463041 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5885ed8d_0267_41a4_9c88_e9be0091674c.slice/crio-26ac117d8eb8a3634aa9a14d3927190f6dbeab1e403a53fe85c245c9bc475d81 WatchSource:0}: Error finding container 26ac117d8eb8a3634aa9a14d3927190f6dbeab1e403a53fe85c245c9bc475d81: Status 404 returned error can't find the container with id 26ac117d8eb8a3634aa9a14d3927190f6dbeab1e403a53fe85c245c9bc475d81 Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.563190 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:21 crc kubenswrapper[4903]: I0128 16:06:21.647731 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:22 crc kubenswrapper[4903]: I0128 16:06:22.219773 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace134af-67ed-4436-9a00-cd4f22afaf4d","Type":"ContainerStarted","Data":"c9dc3ac1cca7f63fc65142cea9e90d71380ff04376c58d3088fb7ed2f8e6a138"} Jan 28 16:06:22 crc kubenswrapper[4903]: I0128 16:06:22.221613 4903 generic.go:334] "Generic (PLEG): container finished" podID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerID="60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a" exitCode=0 Jan 28 16:06:22 crc kubenswrapper[4903]: I0128 16:06:22.221707 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" event={"ID":"5885ed8d-0267-41a4-9c88-e9be0091674c","Type":"ContainerDied","Data":"60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a"} Jan 28 16:06:22 crc kubenswrapper[4903]: I0128 16:06:22.221744 4903 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" event={"ID":"5885ed8d-0267-41a4-9c88-e9be0091674c","Type":"ContainerStarted","Data":"26ac117d8eb8a3634aa9a14d3927190f6dbeab1e403a53fe85c245c9bc475d81"} Jan 28 16:06:22 crc kubenswrapper[4903]: I0128 16:06:22.222804 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d85baae7-8974-44a7-801e-603564209257","Type":"ContainerStarted","Data":"6c6bda3ed053260cb71bc0feddff966be110a97634b522618f5c649a4032d11b"} Jan 28 16:06:22 crc kubenswrapper[4903]: I0128 16:06:22.428618 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f552c7e-3cf3-40e4-8afd-817b1e46302c" path="/var/lib/kubelet/pods/9f552c7e-3cf3-40e4-8afd-817b1e46302c/volumes" Jan 28 16:06:23 crc kubenswrapper[4903]: I0128 16:06:23.235808 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace134af-67ed-4436-9a00-cd4f22afaf4d","Type":"ContainerStarted","Data":"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d"} Jan 28 16:06:23 crc kubenswrapper[4903]: I0128 16:06:23.236144 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace134af-67ed-4436-9a00-cd4f22afaf4d","Type":"ContainerStarted","Data":"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b"} Jan 28 16:06:23 crc kubenswrapper[4903]: I0128 16:06:23.239462 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" event={"ID":"5885ed8d-0267-41a4-9c88-e9be0091674c","Type":"ContainerStarted","Data":"d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe"} Jan 28 16:06:23 crc kubenswrapper[4903]: I0128 16:06:23.239684 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:23 crc kubenswrapper[4903]: I0128 16:06:23.240198 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:24 crc kubenswrapper[4903]: I0128 16:06:24.249835 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d85baae7-8974-44a7-801e-603564209257","Type":"ContainerStarted","Data":"024d3bdbeba1bc44e932d1f8cb7ec55fe511e0b630cb9443a8245139e021e6a3"} Jan 28 16:06:24 crc kubenswrapper[4903]: I0128 16:06:24.250217 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api-log" containerID="cri-o://431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b" gracePeriod=30 Jan 28 16:06:24 crc kubenswrapper[4903]: I0128 16:06:24.250247 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api" containerID="cri-o://83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d" gracePeriod=30 Jan 28 16:06:24 crc kubenswrapper[4903]: I0128 16:06:24.281609 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" podStartSLOduration=4.281592313 podStartE2EDuration="4.281592313s" podCreationTimestamp="2026-01-28 16:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:23.25486214 +0000 UTC m=+1255.530833671" watchObservedRunningTime="2026-01-28 16:06:24.281592313 +0000 
UTC m=+1256.557563824" Jan 28 16:06:24 crc kubenswrapper[4903]: I0128 16:06:24.282323 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.282317363 podStartE2EDuration="4.282317363s" podCreationTimestamp="2026-01-28 16:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:24.279697761 +0000 UTC m=+1256.555669272" watchObservedRunningTime="2026-01-28 16:06:24.282317363 +0000 UTC m=+1256.558288874" Jan 28 16:06:24 crc kubenswrapper[4903]: I0128 16:06:24.920056 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.062887 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-scripts\") pod \"ace134af-67ed-4436-9a00-cd4f22afaf4d\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.062948 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace134af-67ed-4436-9a00-cd4f22afaf4d-etc-machine-id\") pod \"ace134af-67ed-4436-9a00-cd4f22afaf4d\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.063041 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace134af-67ed-4436-9a00-cd4f22afaf4d-logs\") pod \"ace134af-67ed-4436-9a00-cd4f22afaf4d\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.063077 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cfvx\" (UniqueName: \"kubernetes.io/projected/ace134af-67ed-4436-9a00-cd4f22afaf4d-kube-api-access-6cfvx\") pod \"ace134af-67ed-4436-9a00-cd4f22afaf4d\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.063172 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data-custom\") pod \"ace134af-67ed-4436-9a00-cd4f22afaf4d\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.063262 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-combined-ca-bundle\") pod \"ace134af-67ed-4436-9a00-cd4f22afaf4d\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.063291 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data\") pod \"ace134af-67ed-4436-9a00-cd4f22afaf4d\" (UID: \"ace134af-67ed-4436-9a00-cd4f22afaf4d\") " Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.064811 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace134af-67ed-4436-9a00-cd4f22afaf4d-logs" (OuterVolumeSpecName: "logs") pod "ace134af-67ed-4436-9a00-cd4f22afaf4d" (UID: "ace134af-67ed-4436-9a00-cd4f22afaf4d"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.065001 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ace134af-67ed-4436-9a00-cd4f22afaf4d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ace134af-67ed-4436-9a00-cd4f22afaf4d" (UID: "ace134af-67ed-4436-9a00-cd4f22afaf4d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.069844 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace134af-67ed-4436-9a00-cd4f22afaf4d-kube-api-access-6cfvx" (OuterVolumeSpecName: "kube-api-access-6cfvx") pod "ace134af-67ed-4436-9a00-cd4f22afaf4d" (UID: "ace134af-67ed-4436-9a00-cd4f22afaf4d"). InnerVolumeSpecName "kube-api-access-6cfvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.070441 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-scripts" (OuterVolumeSpecName: "scripts") pod "ace134af-67ed-4436-9a00-cd4f22afaf4d" (UID: "ace134af-67ed-4436-9a00-cd4f22afaf4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.070762 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ace134af-67ed-4436-9a00-cd4f22afaf4d" (UID: "ace134af-67ed-4436-9a00-cd4f22afaf4d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.090135 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ace134af-67ed-4436-9a00-cd4f22afaf4d" (UID: "ace134af-67ed-4436-9a00-cd4f22afaf4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.119938 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data" (OuterVolumeSpecName: "config-data") pod "ace134af-67ed-4436-9a00-cd4f22afaf4d" (UID: "ace134af-67ed-4436-9a00-cd4f22afaf4d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.164967 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ace134af-67ed-4436-9a00-cd4f22afaf4d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.165019 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.165034 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace134af-67ed-4436-9a00-cd4f22afaf4d-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.165046 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cfvx\" (UniqueName: \"kubernetes.io/projected/ace134af-67ed-4436-9a00-cd4f22afaf4d-kube-api-access-6cfvx\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.165060 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.165072 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.165084 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace134af-67ed-4436-9a00-cd4f22afaf4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.263355 4903 generic.go:334] "Generic (PLEG): container finished" podID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerID="83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d" exitCode=0 Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.263393 4903 generic.go:334] "Generic (PLEG): container finished" podID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerID="431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b" exitCode=143 Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.263448 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.263426 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace134af-67ed-4436-9a00-cd4f22afaf4d","Type":"ContainerDied","Data":"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d"} Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.263553 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace134af-67ed-4436-9a00-cd4f22afaf4d","Type":"ContainerDied","Data":"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b"} Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.263573 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ace134af-67ed-4436-9a00-cd4f22afaf4d","Type":"ContainerDied","Data":"c9dc3ac1cca7f63fc65142cea9e90d71380ff04376c58d3088fb7ed2f8e6a138"} Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.263620 4903 scope.go:117] "RemoveContainer" containerID="83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.275997 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d85baae7-8974-44a7-801e-603564209257","Type":"ContainerStarted","Data":"0eb488437866d5a68733bab536b7c0b2c159565f79d05088c2ae05f79f837c48"} Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.300678 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.321854476 podStartE2EDuration="5.300656197s" podCreationTimestamp="2026-01-28 16:06:20 +0000 UTC" firstStartedPulling="2026-01-28 16:06:21.571579953 +0000 UTC m=+1253.847551464" lastFinishedPulling="2026-01-28 16:06:23.550381674 +0000 UTC m=+1255.826353185" observedRunningTime="2026-01-28 16:06:25.296329219 +0000 UTC m=+1257.572300750" watchObservedRunningTime="2026-01-28 16:06:25.300656197 +0000 UTC m=+1257.576627708" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.324021 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.335328 4903 scope.go:117] "RemoveContainer" containerID="431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.351695 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.375361 4903 scope.go:117] "RemoveContainer" containerID="83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d" Jan 28 16:06:25 crc kubenswrapper[4903]: E0128 16:06:25.378124 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d\": container with ID starting with 83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d not found: ID does not exist" containerID="83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.378158 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d"} err="failed to get container status \"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d\": rpc error: code = NotFound desc = could not find 
container \"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d\": container with ID starting with 83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d not found: ID does not exist" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.378178 4903 scope.go:117] "RemoveContainer" containerID="431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b" Jan 28 16:06:25 crc kubenswrapper[4903]: E0128 16:06:25.378481 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b\": container with ID starting with 431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b not found: ID does not exist" containerID="431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.378501 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b"} err="failed to get container status \"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b\": rpc error: code = NotFound desc = could not find container \"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b\": container with ID starting with 431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b not found: ID does not exist" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.378515 4903 scope.go:117] "RemoveContainer" containerID="83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.378785 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d"} err="failed to get container status \"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d\": rpc error: code = NotFound desc = could not find container \"83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d\": container with ID starting with 83d86b0ee3ccf88bcff52fbdb620273580a3ae7ce67638b3175f5ede12ef2f6d not found: ID does not exist" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.378808 4903 scope.go:117] "RemoveContainer" containerID="431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.378981 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b"} err="failed to get container status \"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b\": rpc error: code = NotFound desc = could not find container \"431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b\": container with ID starting with 431e01aa1b702d89d9fdb4cf84b47d053ac2c6cbea60a1706bc66302f6c0b82b not found: ID does not exist" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.394652 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:25 crc kubenswrapper[4903]: E0128 16:06:25.395016 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.395029 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api" Jan 28 16:06:25 crc 
kubenswrapper[4903]: E0128 16:06:25.395066 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api-log" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.395073 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api-log" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.395228 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.395244 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" containerName="cinder-api-log" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.396205 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.400414 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.400590 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.400590 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.420596 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471240 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471297 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471320 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471356 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/033b894a-46ce-4bd8-b97c-312c8b7c90dd-logs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471588 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/033b894a-46ce-4bd8-b97c-312c8b7c90dd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471900 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data-custom\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471941 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.471969 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cz8b\" (UniqueName: \"kubernetes.io/projected/033b894a-46ce-4bd8-b97c-312c8b7c90dd-kube-api-access-9cz8b\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.472070 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-scripts\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.543486 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573496 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-scripts\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573571 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573614 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573639 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573682 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/033b894a-46ce-4bd8-b97c-312c8b7c90dd-logs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573729 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/033b894a-46ce-4bd8-b97c-312c8b7c90dd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573819 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data-custom\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573838 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.573859 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cz8b\" (UniqueName: \"kubernetes.io/projected/033b894a-46ce-4bd8-b97c-312c8b7c90dd-kube-api-access-9cz8b\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.574269 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/033b894a-46ce-4bd8-b97c-312c8b7c90dd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.574724 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/033b894a-46ce-4bd8-b97c-312c8b7c90dd-logs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.577175 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.578041 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-scripts\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.578701 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.584238 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data-custom\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.584408 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.585796 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.590821 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cz8b\" (UniqueName: \"kubernetes.io/projected/033b894a-46ce-4bd8-b97c-312c8b7c90dd-kube-api-access-9cz8b\") pod \"cinder-api-0\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.721737 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:06:25 crc kubenswrapper[4903]: I0128 16:06:25.918727 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 16:06:26 crc kubenswrapper[4903]: I0128 16:06:26.132774 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:06:26 crc kubenswrapper[4903]: I0128 16:06:26.285888 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"033b894a-46ce-4bd8-b97c-312c8b7c90dd","Type":"ContainerStarted","Data":"54c34f0381bdb2bdbed9efb44ef91575724d291b31882420c8bd36b933ea7a12"} Jan 28 16:06:26 crc kubenswrapper[4903]: I0128 16:06:26.429647 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace134af-67ed-4436-9a00-cd4f22afaf4d" path="/var/lib/kubelet/pods/ace134af-67ed-4436-9a00-cd4f22afaf4d/volumes" Jan 28 16:06:26 crc kubenswrapper[4903]: I0128 16:06:26.529181 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:06:26 crc kubenswrapper[4903]: I0128 16:06:26.615312 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:06:26 crc kubenswrapper[4903]: I0128 16:06:26.615378 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:06:27 crc kubenswrapper[4903]: I0128 16:06:27.128976 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:06:27 crc kubenswrapper[4903]: I0128 16:06:27.317908 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"033b894a-46ce-4bd8-b97c-312c8b7c90dd","Type":"ContainerStarted","Data":"e3cac4a8f1fa34db395b4644330439522c368c8649ab045e0d9d216976c0e7ee"} Jan 28 16:06:28 crc kubenswrapper[4903]: I0128 16:06:28.329162 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"033b894a-46ce-4bd8-b97c-312c8b7c90dd","Type":"ContainerStarted","Data":"2cc0c1e09b1d32a98d2dde5eee40318869853a44f68e5250ff8ceb601a48d512"} Jan 28 16:06:28 crc kubenswrapper[4903]: I0128 16:06:28.329893 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 16:06:28 crc kubenswrapper[4903]: I0128 16:06:28.357064 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.357039717 podStartE2EDuration="3.357039717s" podCreationTimestamp="2026-01-28 16:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:28.346371977 +0000 UTC m=+1260.622343508" watchObservedRunningTime="2026-01-28 16:06:28.357039717 +0000 UTC m=+1260.633011228" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.394863 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-867d8c4cc5-vz4lw"] Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.412769 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.420105 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.424517 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.424901 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.450117 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-867d8c4cc5-vz4lw"] Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.474182 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-combined-ca-bundle\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.474506 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-log-httpd\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.474636 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-internal-tls-certs\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.474782 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-public-tls-certs\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: 
I0128 16:06:30.474955 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-config-data\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.475168 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-run-httpd\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.475294 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcb9x\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-kube-api-access-qcb9x\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.475435 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-etc-swift\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.577287 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-combined-ca-bundle\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.577357 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-log-httpd\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.577377 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-internal-tls-certs\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.577410 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-public-tls-certs\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.577966 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-log-httpd\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 
16:06:30.578267 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-config-data\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.578365 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-run-httpd\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.578386 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcb9x\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-kube-api-access-qcb9x\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.578416 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-etc-swift\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.578622 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-run-httpd\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.595864 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-etc-swift\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.596120 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-combined-ca-bundle\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.598108 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-config-data\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.598689 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-public-tls-certs\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.599596 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcb9x\" (UniqueName: 
\"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-kube-api-access-qcb9x\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.600667 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-internal-tls-certs\") pod \"swift-proxy-867d8c4cc5-vz4lw\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.757700 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.874780 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.952574 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-8jk6g"] Jan 28 16:06:30 crc kubenswrapper[4903]: I0128 16:06:30.952812 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" podUID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerName="dnsmasq-dns" containerID="cri-o://3ab473d6bf7c4e850de97bffd626c7cfcf68edd83030794c39f1d2dbacd3494f" gracePeriod=10 Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.210895 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.288491 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.347353 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-867d8c4cc5-vz4lw"] Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.422778 4903 generic.go:334] "Generic (PLEG): container finished" podID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerID="3ab473d6bf7c4e850de97bffd626c7cfcf68edd83030794c39f1d2dbacd3494f" exitCode=0 Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.422865 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" event={"ID":"691f7d2f-fc86-4b14-b6c9-2799a4b384e2","Type":"ContainerDied","Data":"3ab473d6bf7c4e850de97bffd626c7cfcf68edd83030794c39f1d2dbacd3494f"} Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.424101 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="cinder-scheduler" containerID="cri-o://024d3bdbeba1bc44e932d1f8cb7ec55fe511e0b630cb9443a8245139e021e6a3" gracePeriod=30 Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.424390 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" event={"ID":"bf32204d-973f-4397-8fbe-8b155f1f6f52","Type":"ContainerStarted","Data":"e8432814af98cdd38786133fdb7e2fcd90313e16de2dcdc3be05676c6460116e"} Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.424720 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="probe" containerID="cri-o://0eb488437866d5a68733bab536b7c0b2c159565f79d05088c2ae05f79f837c48" 
gracePeriod=30 Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.548884 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.600368 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqxbw\" (UniqueName: \"kubernetes.io/projected/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-kube-api-access-kqxbw\") pod \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.600413 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-svc\") pod \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.600446 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-swift-storage-0\") pod \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.600685 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-nb\") pod \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.600702 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-sb\") pod \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.600725 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-config\") pod \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\" (UID: \"691f7d2f-fc86-4b14-b6c9-2799a4b384e2\") " Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.604987 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-kube-api-access-kqxbw" (OuterVolumeSpecName: "kube-api-access-kqxbw") pod "691f7d2f-fc86-4b14-b6c9-2799a4b384e2" (UID: "691f7d2f-fc86-4b14-b6c9-2799a4b384e2"). InnerVolumeSpecName "kube-api-access-kqxbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.677744 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "691f7d2f-fc86-4b14-b6c9-2799a4b384e2" (UID: "691f7d2f-fc86-4b14-b6c9-2799a4b384e2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.684515 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "691f7d2f-fc86-4b14-b6c9-2799a4b384e2" (UID: "691f7d2f-fc86-4b14-b6c9-2799a4b384e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.686119 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-config" (OuterVolumeSpecName: "config") pod "691f7d2f-fc86-4b14-b6c9-2799a4b384e2" (UID: "691f7d2f-fc86-4b14-b6c9-2799a4b384e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.689595 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "691f7d2f-fc86-4b14-b6c9-2799a4b384e2" (UID: "691f7d2f-fc86-4b14-b6c9-2799a4b384e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.703431 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.703470 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.703483 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.703496 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.703510 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqxbw\" (UniqueName: \"kubernetes.io/projected/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-kube-api-access-kqxbw\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.704627 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "691f7d2f-fc86-4b14-b6c9-2799a4b384e2" (UID: "691f7d2f-fc86-4b14-b6c9-2799a4b384e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.745865 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 16:06:31 crc kubenswrapper[4903]: E0128 16:06:31.746360 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerName="init" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.746380 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerName="init" Jan 28 16:06:31 crc kubenswrapper[4903]: E0128 16:06:31.746393 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerName="dnsmasq-dns" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.746401 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerName="dnsmasq-dns" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.746642 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" containerName="dnsmasq-dns" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.747395 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.750686 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.750996 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9zhzk" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.755338 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.762136 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.805823 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config-secret\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.805963 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.805998 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnvr\" (UniqueName: \"kubernetes.io/projected/e1ce53ab-7d85-47b9-a886-162ef3726997-kube-api-access-4dnvr\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.806184 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " 
pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.806269 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/691f7d2f-fc86-4b14-b6c9-2799a4b384e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.908036 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.908103 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dnvr\" (UniqueName: \"kubernetes.io/projected/e1ce53ab-7d85-47b9-a886-162ef3726997-kube-api-access-4dnvr\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.908178 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.908279 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config-secret\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.917095 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.917419 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config-secret\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.918819 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:31 crc kubenswrapper[4903]: I0128 16:06:31.931166 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnvr\" (UniqueName: \"kubernetes.io/projected/e1ce53ab-7d85-47b9-a886-162ef3726997-kube-api-access-4dnvr\") pod \"openstackclient\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " pod="openstack/openstackclient" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.192853 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.435992 4903 generic.go:334] "Generic (PLEG): container finished" podID="d85baae7-8974-44a7-801e-603564209257" containerID="0eb488437866d5a68733bab536b7c0b2c159565f79d05088c2ae05f79f837c48" exitCode=0 Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.436407 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d85baae7-8974-44a7-801e-603564209257","Type":"ContainerDied","Data":"0eb488437866d5a68733bab536b7c0b2c159565f79d05088c2ae05f79f837c48"} Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.439489 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" event={"ID":"691f7d2f-fc86-4b14-b6c9-2799a4b384e2","Type":"ContainerDied","Data":"105a3714f92b036a62de03d1ddcfda814e91804ff990dd3cfc9483b871638523"} Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.439561 4903 scope.go:117] "RemoveContainer" containerID="3ab473d6bf7c4e850de97bffd626c7cfcf68edd83030794c39f1d2dbacd3494f" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.439761 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-8jk6g" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.444290 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" event={"ID":"bf32204d-973f-4397-8fbe-8b155f1f6f52","Type":"ContainerStarted","Data":"d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4"} Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.444360 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" event={"ID":"bf32204d-973f-4397-8fbe-8b155f1f6f52","Type":"ContainerStarted","Data":"7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2"} Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.444571 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.444612 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.472024 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-8jk6g"] Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.472462 4903 scope.go:117] "RemoveContainer" containerID="e457863635d09b126b806ff8bb8af8825bbebcfbe9ef6d06ad4336fbc3bd8a67" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.483367 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-8jk6g"] Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.496814 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" podStartSLOduration=2.496794344 podStartE2EDuration="2.496794344s" podCreationTimestamp="2026-01-28 16:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:32.490725209 +0000 UTC m=+1264.766696720" watchObservedRunningTime="2026-01-28 16:06:32.496794344 +0000 UTC m=+1264.772765855" Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.647346 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 
16:06:32.647865 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-central-agent" containerID="cri-o://88f6a681447d9ba825540a06af50abf1bd04892418ed932aad1a8245843100e9" gracePeriod=30 Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.647973 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-notification-agent" containerID="cri-o://0af711fbec9350590df98e28a67d803dda37f2df8c8baafe45cedb6b953dc632" gracePeriod=30 Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.647981 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="sg-core" containerID="cri-o://a687bd805af1065d8802f7bf7931b2436e411acdef4e7541da621edeb435e77e" gracePeriod=30 Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.647981 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="proxy-httpd" containerID="cri-o://c2f0248a0c24bd29c5e5b3a6141300bb0fdcbe281e614df5506ce6bd943af92d" gracePeriod=30 Jan 28 16:06:32 crc kubenswrapper[4903]: W0128 16:06:32.678299 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1ce53ab_7d85_47b9_a886_162ef3726997.slice/crio-d863db255f7c421de30e9aeda54ee1cb5d3ec6eab66c9d3e0c9a4cdb4c4aa27b WatchSource:0}: Error finding container d863db255f7c421de30e9aeda54ee1cb5d3ec6eab66c9d3e0c9a4cdb4c4aa27b: Status 404 returned error can't find the container with id d863db255f7c421de30e9aeda54ee1cb5d3ec6eab66c9d3e0c9a4cdb4c4aa27b Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.679256 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 16:06:32 crc kubenswrapper[4903]: I0128 16:06:32.684452 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.160:3000/\": EOF" Jan 28 16:06:33 crc kubenswrapper[4903]: I0128 16:06:33.458144 4903 generic.go:334] "Generic (PLEG): container finished" podID="cae03240-8c2e-463e-a674-10c21514d9cd" containerID="c2f0248a0c24bd29c5e5b3a6141300bb0fdcbe281e614df5506ce6bd943af92d" exitCode=0 Jan 28 16:06:33 crc kubenswrapper[4903]: I0128 16:06:33.458185 4903 generic.go:334] "Generic (PLEG): container finished" podID="cae03240-8c2e-463e-a674-10c21514d9cd" containerID="a687bd805af1065d8802f7bf7931b2436e411acdef4e7541da621edeb435e77e" exitCode=2 Jan 28 16:06:33 crc kubenswrapper[4903]: I0128 16:06:33.458198 4903 generic.go:334] "Generic (PLEG): container finished" podID="cae03240-8c2e-463e-a674-10c21514d9cd" containerID="88f6a681447d9ba825540a06af50abf1bd04892418ed932aad1a8245843100e9" exitCode=0 Jan 28 16:06:33 crc kubenswrapper[4903]: I0128 16:06:33.458217 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerDied","Data":"c2f0248a0c24bd29c5e5b3a6141300bb0fdcbe281e614df5506ce6bd943af92d"} Jan 28 16:06:33 crc kubenswrapper[4903]: I0128 16:06:33.458276 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerDied","Data":"a687bd805af1065d8802f7bf7931b2436e411acdef4e7541da621edeb435e77e"} Jan 28 16:06:33 crc kubenswrapper[4903]: I0128 16:06:33.458289 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerDied","Data":"88f6a681447d9ba825540a06af50abf1bd04892418ed932aad1a8245843100e9"} Jan 28 16:06:33 crc kubenswrapper[4903]: I0128 16:06:33.461293 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e1ce53ab-7d85-47b9-a886-162ef3726997","Type":"ContainerStarted","Data":"d863db255f7c421de30e9aeda54ee1cb5d3ec6eab66c9d3e0c9a4cdb4c4aa27b"} Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.429553 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691f7d2f-fc86-4b14-b6c9-2799a4b384e2" path="/var/lib/kubelet/pods/691f7d2f-fc86-4b14-b6c9-2799a4b384e2/volumes" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.475514 4903 generic.go:334] "Generic (PLEG): container finished" podID="cae03240-8c2e-463e-a674-10c21514d9cd" containerID="0af711fbec9350590df98e28a67d803dda37f2df8c8baafe45cedb6b953dc632" exitCode=0 Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.475581 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerDied","Data":"0af711fbec9350590df98e28a67d803dda37f2df8c8baafe45cedb6b953dc632"} Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.798640 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.868113 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpppl\" (UniqueName: \"kubernetes.io/projected/cae03240-8c2e-463e-a674-10c21514d9cd-kube-api-access-bpppl\") pod \"cae03240-8c2e-463e-a674-10c21514d9cd\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.868209 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-scripts\") pod \"cae03240-8c2e-463e-a674-10c21514d9cd\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.868321 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-log-httpd\") pod \"cae03240-8c2e-463e-a674-10c21514d9cd\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.868364 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-combined-ca-bundle\") pod \"cae03240-8c2e-463e-a674-10c21514d9cd\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.868389 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-run-httpd\") pod \"cae03240-8c2e-463e-a674-10c21514d9cd\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.869149 4903 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-sg-core-conf-yaml\") pod \"cae03240-8c2e-463e-a674-10c21514d9cd\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.869248 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-config-data\") pod \"cae03240-8c2e-463e-a674-10c21514d9cd\" (UID: \"cae03240-8c2e-463e-a674-10c21514d9cd\") " Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.869357 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cae03240-8c2e-463e-a674-10c21514d9cd" (UID: "cae03240-8c2e-463e-a674-10c21514d9cd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.869609 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cae03240-8c2e-463e-a674-10c21514d9cd" (UID: "cae03240-8c2e-463e-a674-10c21514d9cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.869753 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.869768 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae03240-8c2e-463e-a674-10c21514d9cd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.878344 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae03240-8c2e-463e-a674-10c21514d9cd-kube-api-access-bpppl" (OuterVolumeSpecName: "kube-api-access-bpppl") pod "cae03240-8c2e-463e-a674-10c21514d9cd" (UID: "cae03240-8c2e-463e-a674-10c21514d9cd"). InnerVolumeSpecName "kube-api-access-bpppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.886650 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-scripts" (OuterVolumeSpecName: "scripts") pod "cae03240-8c2e-463e-a674-10c21514d9cd" (UID: "cae03240-8c2e-463e-a674-10c21514d9cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.934810 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cae03240-8c2e-463e-a674-10c21514d9cd" (UID: "cae03240-8c2e-463e-a674-10c21514d9cd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.971312 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpppl\" (UniqueName: \"kubernetes.io/projected/cae03240-8c2e-463e-a674-10c21514d9cd-kube-api-access-bpppl\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.971339 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.971348 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:34 crc kubenswrapper[4903]: I0128 16:06:34.984340 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae03240-8c2e-463e-a674-10c21514d9cd" (UID: "cae03240-8c2e-463e-a674-10c21514d9cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.027389 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-config-data" (OuterVolumeSpecName: "config-data") pod "cae03240-8c2e-463e-a674-10c21514d9cd" (UID: "cae03240-8c2e-463e-a674-10c21514d9cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.072806 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.072848 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae03240-8c2e-463e-a674-10c21514d9cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.496725 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae03240-8c2e-463e-a674-10c21514d9cd","Type":"ContainerDied","Data":"985e89a7f4b3c100b028adff26e257288c6f56563544d67de1b5f7af3e79d7ed"} Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.496789 4903 scope.go:117] "RemoveContainer" containerID="c2f0248a0c24bd29c5e5b3a6141300bb0fdcbe281e614df5506ce6bd943af92d" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.496818 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.511253 4903 generic.go:334] "Generic (PLEG): container finished" podID="d85baae7-8974-44a7-801e-603564209257" containerID="024d3bdbeba1bc44e932d1f8cb7ec55fe511e0b630cb9443a8245139e021e6a3" exitCode=0 Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.511635 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d85baae7-8974-44a7-801e-603564209257","Type":"ContainerDied","Data":"024d3bdbeba1bc44e932d1f8cb7ec55fe511e0b630cb9443a8245139e021e6a3"} Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.551348 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.561858 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.563410 4903 scope.go:117] "RemoveContainer" containerID="a687bd805af1065d8802f7bf7931b2436e411acdef4e7541da621edeb435e77e" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.587825 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:35 crc kubenswrapper[4903]: E0128 16:06:35.588584 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-notification-agent" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.588608 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-notification-agent" Jan 28 16:06:35 crc kubenswrapper[4903]: E0128 16:06:35.588653 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="sg-core" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.588662 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="sg-core" Jan 28 16:06:35 crc kubenswrapper[4903]: E0128 16:06:35.588671 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="proxy-httpd" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.588680 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="proxy-httpd" Jan 28 16:06:35 crc kubenswrapper[4903]: E0128 16:06:35.588710 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-central-agent" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.588718 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-central-agent" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.589291 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-notification-agent" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.589330 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="proxy-httpd" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.589348 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="sg-core" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.589360 4903 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" containerName="ceilometer-central-agent" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.591464 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.594579 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.595040 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.595430 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.626608 4903 scope.go:117] "RemoveContainer" containerID="0af711fbec9350590df98e28a67d803dda37f2df8c8baafe45cedb6b953dc632" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.649288 4903 scope.go:117] "RemoveContainer" containerID="88f6a681447d9ba825540a06af50abf1bd04892418ed932aad1a8245843100e9" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.686260 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-scripts\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.686512 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsz4l\" (UniqueName: \"kubernetes.io/projected/bfff7b8a-803b-4945-90a3-d135faedfe34-kube-api-access-dsz4l\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.686601 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.686719 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-config-data\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.686758 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-run-httpd\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.686814 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-log-httpd\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.686835 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.788497 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-config-data\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.788571 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-run-httpd\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.788599 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-log-httpd\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.788622 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.788705 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-scripts\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.788808 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsz4l\" (UniqueName: \"kubernetes.io/projected/bfff7b8a-803b-4945-90a3-d135faedfe34-kube-api-access-dsz4l\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.788846 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.789557 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-run-httpd\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.789960 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-log-httpd\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.802932 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.804457 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-scripts\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.807047 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-config-data\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.807328 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.810977 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsz4l\" (UniqueName: \"kubernetes.io/projected/bfff7b8a-803b-4945-90a3-d135faedfe34-kube-api-access-dsz4l\") pod \"ceilometer-0\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.898036 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.928107 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.991281 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d85baae7-8974-44a7-801e-603564209257-etc-machine-id\") pod \"d85baae7-8974-44a7-801e-603564209257\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.991418 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data\") pod \"d85baae7-8974-44a7-801e-603564209257\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.991630 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-scripts\") pod \"d85baae7-8974-44a7-801e-603564209257\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.991514 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d85baae7-8974-44a7-801e-603564209257-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d85baae7-8974-44a7-801e-603564209257" (UID: "d85baae7-8974-44a7-801e-603564209257"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.991662 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-combined-ca-bundle\") pod \"d85baae7-8974-44a7-801e-603564209257\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.991716 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data-custom\") pod \"d85baae7-8974-44a7-801e-603564209257\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.991804 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmvn\" (UniqueName: \"kubernetes.io/projected/d85baae7-8974-44a7-801e-603564209257-kube-api-access-cgmvn\") pod \"d85baae7-8974-44a7-801e-603564209257\" (UID: \"d85baae7-8974-44a7-801e-603564209257\") " Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.992758 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d85baae7-8974-44a7-801e-603564209257-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:35 crc kubenswrapper[4903]: I0128 16:06:35.999075 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d85baae7-8974-44a7-801e-603564209257" (UID: "d85baae7-8974-44a7-801e-603564209257"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.006806 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-scripts" (OuterVolumeSpecName: "scripts") pod "d85baae7-8974-44a7-801e-603564209257" (UID: "d85baae7-8974-44a7-801e-603564209257"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.007941 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85baae7-8974-44a7-801e-603564209257-kube-api-access-cgmvn" (OuterVolumeSpecName: "kube-api-access-cgmvn") pod "d85baae7-8974-44a7-801e-603564209257" (UID: "d85baae7-8974-44a7-801e-603564209257"). InnerVolumeSpecName "kube-api-access-cgmvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.094159 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmvn\" (UniqueName: \"kubernetes.io/projected/d85baae7-8974-44a7-801e-603564209257-kube-api-access-cgmvn\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.094190 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.094204 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.095655 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d85baae7-8974-44a7-801e-603564209257" (UID: "d85baae7-8974-44a7-801e-603564209257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.207091 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data" (OuterVolumeSpecName: "config-data") pod "d85baae7-8974-44a7-801e-603564209257" (UID: "d85baae7-8974-44a7-801e-603564209257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.211779 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.211817 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85baae7-8974-44a7-801e-603564209257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.425243 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae03240-8c2e-463e-a674-10c21514d9cd" path="/var/lib/kubelet/pods/cae03240-8c2e-463e-a674-10c21514d9cd/volumes" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.477389 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.524114 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d85baae7-8974-44a7-801e-603564209257","Type":"ContainerDied","Data":"6c6bda3ed053260cb71bc0feddff966be110a97634b522618f5c649a4032d11b"} Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.524177 4903 scope.go:117] "RemoveContainer" containerID="0eb488437866d5a68733bab536b7c0b2c159565f79d05088c2ae05f79f837c48" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.524282 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.528638 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerStarted","Data":"74f659ed6621b3a9e2851e7dca7cee033009e7a314cc9727ce25aa8d1ec2d9a7"} Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.552478 4903 scope.go:117] "RemoveContainer" containerID="024d3bdbeba1bc44e932d1f8cb7ec55fe511e0b630cb9443a8245139e021e6a3" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.560621 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.576317 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.595702 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:36 crc kubenswrapper[4903]: E0128 16:06:36.596120 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="probe" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.596136 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="probe" Jan 28 16:06:36 crc kubenswrapper[4903]: E0128 16:06:36.596160 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="cinder-scheduler" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.596167 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="cinder-scheduler" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.596343 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="probe" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.596362 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85baae7-8974-44a7-801e-603564209257" containerName="cinder-scheduler" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.597383 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.601414 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.607347 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.620392 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-scripts\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.620453 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.622470 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhs4\" (UniqueName: \"kubernetes.io/projected/967fdf30-3d73-4e3f-9056-e270e10d3213-kube-api-access-snhs4\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.622617 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.622648 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.622684 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/967fdf30-3d73-4e3f-9056-e270e10d3213-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.724995 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/967fdf30-3d73-4e3f-9056-e270e10d3213-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.725136 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-scripts\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.725172 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/967fdf30-3d73-4e3f-9056-e270e10d3213-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.725202 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.725294 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snhs4\" (UniqueName: \"kubernetes.io/projected/967fdf30-3d73-4e3f-9056-e270e10d3213-kube-api-access-snhs4\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.725504 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.725567 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.731126 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-scripts\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.734919 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.737134 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.744186 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.746213 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snhs4\" (UniqueName: \"kubernetes.io/projected/967fdf30-3d73-4e3f-9056-e270e10d3213-kube-api-access-snhs4\") pod \"cinder-scheduler-0\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " pod="openstack/cinder-scheduler-0" Jan 28 
16:06:36 crc kubenswrapper[4903]: I0128 16:06:36.919542 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:06:37 crc kubenswrapper[4903]: I0128 16:06:37.433901 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:06:37 crc kubenswrapper[4903]: I0128 16:06:37.542948 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerStarted","Data":"bb58e6e7b3fb998dd25e7187d57e812a8ffd22754767eb329724dfb80a9cecfa"} Jan 28 16:06:37 crc kubenswrapper[4903]: I0128 16:06:37.544662 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"967fdf30-3d73-4e3f-9056-e270e10d3213","Type":"ContainerStarted","Data":"4842cbaff69ee464c8b52f74112164325757b5ceb67640132bbf740fd1b347bc"} Jan 28 16:06:37 crc kubenswrapper[4903]: I0128 16:06:37.597815 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:38 crc kubenswrapper[4903]: I0128 16:06:38.184254 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 16:06:38 crc kubenswrapper[4903]: I0128 16:06:38.459486 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85baae7-8974-44a7-801e-603564209257" path="/var/lib/kubelet/pods/d85baae7-8974-44a7-801e-603564209257/volumes" Jan 28 16:06:38 crc kubenswrapper[4903]: I0128 16:06:38.556593 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"967fdf30-3d73-4e3f-9056-e270e10d3213","Type":"ContainerStarted","Data":"ec452ecafe6bbdf14b8e60c7db18384312eea995612c19c665214db7b6ff8163"} Jan 28 16:06:39 crc kubenswrapper[4903]: I0128 16:06:39.582318 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerStarted","Data":"00ea57d1454b0c9e617fc2379c72affc83af76a8b09b95f41f4934d0ab93e9ad"} Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 16:06:40.593942 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"967fdf30-3d73-4e3f-9056-e270e10d3213","Type":"ContainerStarted","Data":"84e46dfe4c416722411c13edc8cb824e9b50a554e89df0cadc2ab7b6cbd19188"} Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 16:06:40.618070 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.618053034 podStartE2EDuration="4.618053034s" podCreationTimestamp="2026-01-28 16:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:40.617938101 +0000 UTC m=+1272.893909642" watchObservedRunningTime="2026-01-28 16:06:40.618053034 +0000 UTC m=+1272.894024555" Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 16:06:40.705891 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 16:06:40.773667 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 16:06:40.779172 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-756cdffcb8-s2nn9"] Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 
16:06:40.779398 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-756cdffcb8-s2nn9" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-api" containerID="cri-o://01040a11f788e4571e2ef7dad1033cf47b4b204a8fef5289b42053b81549198c" gracePeriod=30 Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 16:06:40.779561 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-756cdffcb8-s2nn9" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-httpd" containerID="cri-o://d6878bfb9cae3d3ccde58fc06ed4eea3ec4003552e092da20838ae81544d9587" gracePeriod=30 Jan 28 16:06:40 crc kubenswrapper[4903]: I0128 16:06:40.786985 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:06:41 crc kubenswrapper[4903]: I0128 16:06:41.605128 4903 generic.go:334] "Generic (PLEG): container finished" podID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerID="d6878bfb9cae3d3ccde58fc06ed4eea3ec4003552e092da20838ae81544d9587" exitCode=0 Jan 28 16:06:41 crc kubenswrapper[4903]: I0128 16:06:41.605209 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756cdffcb8-s2nn9" event={"ID":"41c983f0-cfa7-48aa-9021-e570c07c4c43","Type":"ContainerDied","Data":"d6878bfb9cae3d3ccde58fc06ed4eea3ec4003552e092da20838ae81544d9587"} Jan 28 16:06:41 crc kubenswrapper[4903]: I0128 16:06:41.920588 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 16:06:44 crc kubenswrapper[4903]: I0128 16:06:44.333168 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:06:47 crc kubenswrapper[4903]: I0128 16:06:47.183157 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 16:06:48 crc kubenswrapper[4903]: E0128 16:06:48.643214 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944" Jan 28 16:06:48 crc kubenswrapper[4903]: E0128 16:06:48.643723 4903 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65bh644h67h66h56bh88h577h56bh8fh5fch686hf7h5fh65bh598h59dh97h557h8h54ch5f7hb4hc6hbfh58fhfbh586hc6h544hdch5bbh5fcq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dnvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(e1ce53ab-7d85-47b9-a886-162ef3726997): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:06:48 crc kubenswrapper[4903]: E0128 16:06:48.645073 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="e1ce53ab-7d85-47b9-a886-162ef3726997" Jan 28 16:06:48 crc kubenswrapper[4903]: E0128 16:06:48.676180 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944\\\"\"" pod="openstack/openstackclient" podUID="e1ce53ab-7d85-47b9-a886-162ef3726997" Jan 28 16:06:49 crc kubenswrapper[4903]: I0128 16:06:49.683979 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerStarted","Data":"928d84dfd649225fa5c1eb6237f229f51c806cb41bc72d9d06a671ea98cb939e"} Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.671934 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jpjph"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.674055 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.702196 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jpjph"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.717971 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerStarted","Data":"c3461eeb5e79b143d43fce38089e489d4ea3bc6fdb49c419922bbb6955ba83fd"} Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.718174 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-central-agent" containerID="cri-o://bb58e6e7b3fb998dd25e7187d57e812a8ffd22754767eb329724dfb80a9cecfa" gracePeriod=30 Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.718470 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-notification-agent" containerID="cri-o://00ea57d1454b0c9e617fc2379c72affc83af76a8b09b95f41f4934d0ab93e9ad" gracePeriod=30 Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.718480 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="proxy-httpd" containerID="cri-o://c3461eeb5e79b143d43fce38089e489d4ea3bc6fdb49c419922bbb6955ba83fd" gracePeriod=30 Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.718515 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.718480 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="sg-core" containerID="cri-o://928d84dfd649225fa5c1eb6237f229f51c806cb41bc72d9d06a671ea98cb939e" gracePeriod=30 Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.747241 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.924599253 podStartE2EDuration="16.747222703s" podCreationTimestamp="2026-01-28 16:06:35 +0000 UTC" firstStartedPulling="2026-01-28 16:06:36.48827618 +0000 UTC m=+1268.764247681" lastFinishedPulling="2026-01-28 16:06:51.31089962 +0000 UTC m=+1283.586871131" observedRunningTime="2026-01-28 16:06:51.739966375 +0000 UTC m=+1284.015937886" watchObservedRunningTime="2026-01-28 16:06:51.747222703 +0000 UTC m=+1284.023194214" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.763443 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-fwdxv"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.764615 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.780164 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fwdxv"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.795775 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4756c433-f387-49e6-ada4-56bec03547c5-operator-scripts\") pod \"nova-api-db-create-jpjph\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.795909 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/4756c433-f387-49e6-ada4-56bec03547c5-kube-api-access-75gc5\") pod \"nova-api-db-create-jpjph\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.873511 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c6e6-account-create-update-st6gx"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.874969 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.876756 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.895735 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c6e6-account-create-update-st6gx"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.896907 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4756c433-f387-49e6-ada4-56bec03547c5-operator-scripts\") pod \"nova-api-db-create-jpjph\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.896941 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30606f8f-095e-47cc-8784-9ea99eaf293a-operator-scripts\") pod \"nova-cell0-db-create-fwdxv\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.897011 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/4756c433-f387-49e6-ada4-56bec03547c5-kube-api-access-75gc5\") pod \"nova-api-db-create-jpjph\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.897031 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4kfg\" (UniqueName: \"kubernetes.io/projected/30606f8f-095e-47cc-8784-9ea99eaf293a-kube-api-access-l4kfg\") pod \"nova-cell0-db-create-fwdxv\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.897850 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4756c433-f387-49e6-ada4-56bec03547c5-operator-scripts\") pod \"nova-api-db-create-jpjph\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.919857 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/4756c433-f387-49e6-ada4-56bec03547c5-kube-api-access-75gc5\") pod \"nova-api-db-create-jpjph\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.966779 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q5hf4"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.968254 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.975283 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q5hf4"] Jan 28 16:06:51 crc kubenswrapper[4903]: I0128 16:06:51.998871 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.000891 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f38f215-5d58-4933-90c7-ccf27a223339-operator-scripts\") pod \"nova-api-c6e6-account-create-update-st6gx\" (UID: \"7f38f215-5d58-4933-90c7-ccf27a223339\") " pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.000964 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqcg6\" (UniqueName: \"kubernetes.io/projected/7f38f215-5d58-4933-90c7-ccf27a223339-kube-api-access-xqcg6\") pod \"nova-api-c6e6-account-create-update-st6gx\" (UID: \"7f38f215-5d58-4933-90c7-ccf27a223339\") " pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.001055 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4kfg\" (UniqueName: \"kubernetes.io/projected/30606f8f-095e-47cc-8784-9ea99eaf293a-kube-api-access-l4kfg\") pod \"nova-cell0-db-create-fwdxv\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.001221 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30606f8f-095e-47cc-8784-9ea99eaf293a-operator-scripts\") pod \"nova-cell0-db-create-fwdxv\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.003438 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30606f8f-095e-47cc-8784-9ea99eaf293a-operator-scripts\") pod \"nova-cell0-db-create-fwdxv\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.022419 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4kfg\" (UniqueName: 
\"kubernetes.io/projected/30606f8f-095e-47cc-8784-9ea99eaf293a-kube-api-access-l4kfg\") pod \"nova-cell0-db-create-fwdxv\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.081853 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c8dd-account-create-update-zmxgn"] Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.082947 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.085613 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.086687 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.104635 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9ffd7e-7027-4e36-ad58-163afe824cc5-operator-scripts\") pod \"nova-cell1-db-create-q5hf4\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.104990 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvgzw\" (UniqueName: \"kubernetes.io/projected/ac9ffd7e-7027-4e36-ad58-163afe824cc5-kube-api-access-kvgzw\") pod \"nova-cell1-db-create-q5hf4\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.105053 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f38f215-5d58-4933-90c7-ccf27a223339-operator-scripts\") pod \"nova-api-c6e6-account-create-update-st6gx\" (UID: \"7f38f215-5d58-4933-90c7-ccf27a223339\") " pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.105088 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqcg6\" (UniqueName: \"kubernetes.io/projected/7f38f215-5d58-4933-90c7-ccf27a223339-kube-api-access-xqcg6\") pod \"nova-api-c6e6-account-create-update-st6gx\" (UID: \"7f38f215-5d58-4933-90c7-ccf27a223339\") " pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.106371 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f38f215-5d58-4933-90c7-ccf27a223339-operator-scripts\") pod \"nova-api-c6e6-account-create-update-st6gx\" (UID: \"7f38f215-5d58-4933-90c7-ccf27a223339\") " pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.112260 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c8dd-account-create-update-zmxgn"] Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.139043 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqcg6\" (UniqueName: \"kubernetes.io/projected/7f38f215-5d58-4933-90c7-ccf27a223339-kube-api-access-xqcg6\") pod \"nova-api-c6e6-account-create-update-st6gx\" (UID: 
\"7f38f215-5d58-4933-90c7-ccf27a223339\") " pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.206355 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.207927 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmdwm\" (UniqueName: \"kubernetes.io/projected/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-kube-api-access-mmdwm\") pod \"nova-cell0-c8dd-account-create-update-zmxgn\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.208140 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9ffd7e-7027-4e36-ad58-163afe824cc5-operator-scripts\") pod \"nova-cell1-db-create-q5hf4\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.208248 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-operator-scripts\") pod \"nova-cell0-c8dd-account-create-update-zmxgn\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.208356 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvgzw\" (UniqueName: \"kubernetes.io/projected/ac9ffd7e-7027-4e36-ad58-163afe824cc5-kube-api-access-kvgzw\") pod \"nova-cell1-db-create-q5hf4\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.213871 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9ffd7e-7027-4e36-ad58-163afe824cc5-operator-scripts\") pod \"nova-cell1-db-create-q5hf4\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.230440 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvgzw\" (UniqueName: \"kubernetes.io/projected/ac9ffd7e-7027-4e36-ad58-163afe824cc5-kube-api-access-kvgzw\") pod \"nova-cell1-db-create-q5hf4\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.265192 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-njdbg"] Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.277398 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.280277 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.282104 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-njdbg"] Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.295237 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.309529 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-operator-scripts\") pod \"nova-cell0-c8dd-account-create-update-zmxgn\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.309646 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmdwm\" (UniqueName: \"kubernetes.io/projected/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-kube-api-access-mmdwm\") pod \"nova-cell0-c8dd-account-create-update-zmxgn\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.312789 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-operator-scripts\") pod \"nova-cell0-c8dd-account-create-update-zmxgn\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.333377 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmdwm\" (UniqueName: \"kubernetes.io/projected/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-kube-api-access-mmdwm\") pod \"nova-cell0-c8dd-account-create-update-zmxgn\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.411963 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqpzs\" (UniqueName: \"kubernetes.io/projected/69121677-f86b-414e-bcba-b7e808aff916-kube-api-access-wqpzs\") pod \"nova-cell1-4ff7-account-create-update-njdbg\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.412055 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69121677-f86b-414e-bcba-b7e808aff916-operator-scripts\") pod \"nova-cell1-4ff7-account-create-update-njdbg\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.417420 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.513857 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqpzs\" (UniqueName: \"kubernetes.io/projected/69121677-f86b-414e-bcba-b7e808aff916-kube-api-access-wqpzs\") pod \"nova-cell1-4ff7-account-create-update-njdbg\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.514331 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69121677-f86b-414e-bcba-b7e808aff916-operator-scripts\") pod \"nova-cell1-4ff7-account-create-update-njdbg\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.530931 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69121677-f86b-414e-bcba-b7e808aff916-operator-scripts\") pod \"nova-cell1-4ff7-account-create-update-njdbg\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.535829 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jpjph"] Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.536084 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqpzs\" (UniqueName: \"kubernetes.io/projected/69121677-f86b-414e-bcba-b7e808aff916-kube-api-access-wqpzs\") pod \"nova-cell1-4ff7-account-create-update-njdbg\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.674674 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fwdxv"] Jan 28 16:06:52 crc kubenswrapper[4903]: W0128 16:06:52.677688 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30606f8f_095e_47cc_8784_9ea99eaf293a.slice/crio-686b398144e5e531e6576938f0d3e0df818d8a56161128f95699fd59ed500262 WatchSource:0}: Error finding container 686b398144e5e531e6576938f0d3e0df818d8a56161128f95699fd59ed500262: Status 404 returned error can't find the container with id 686b398144e5e531e6576938f0d3e0df818d8a56161128f95699fd59ed500262 Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.737841 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fwdxv" event={"ID":"30606f8f-095e-47cc-8784-9ea99eaf293a","Type":"ContainerStarted","Data":"686b398144e5e531e6576938f0d3e0df818d8a56161128f95699fd59ed500262"} Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.743789 4903 generic.go:334] "Generic (PLEG): container finished" podID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerID="928d84dfd649225fa5c1eb6237f229f51c806cb41bc72d9d06a671ea98cb939e" exitCode=2 Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.743818 4903 generic.go:334] "Generic (PLEG): container finished" podID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerID="bb58e6e7b3fb998dd25e7187d57e812a8ffd22754767eb329724dfb80a9cecfa" exitCode=0 Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.743855 4903 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerDied","Data":"928d84dfd649225fa5c1eb6237f229f51c806cb41bc72d9d06a671ea98cb939e"} Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.743882 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerDied","Data":"bb58e6e7b3fb998dd25e7187d57e812a8ffd22754767eb329724dfb80a9cecfa"} Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.745695 4903 generic.go:334] "Generic (PLEG): container finished" podID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerID="01040a11f788e4571e2ef7dad1033cf47b4b204a8fef5289b42053b81549198c" exitCode=0 Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.747207 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756cdffcb8-s2nn9" event={"ID":"41c983f0-cfa7-48aa-9021-e570c07c4c43","Type":"ContainerDied","Data":"01040a11f788e4571e2ef7dad1033cf47b4b204a8fef5289b42053b81549198c"} Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.749139 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jpjph" event={"ID":"4756c433-f387-49e6-ada4-56bec03547c5","Type":"ContainerStarted","Data":"7c0cb675d8f91311e4b0bd68922244f9575985d6b2651d18335b9f71aff2760e"} Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.764204 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.848966 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:52 crc kubenswrapper[4903]: I0128 16:06:52.882274 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c6e6-account-create-update-st6gx"] Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:52.999791 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q5hf4"] Jan 28 16:06:53 crc kubenswrapper[4903]: W0128 16:06:53.005317 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac9ffd7e_7027_4e36_ad58_163afe824cc5.slice/crio-bf124fefe3534047d2abc55de0886d227c5bd303a1d4140217435be0295b1a69 WatchSource:0}: Error finding container bf124fefe3534047d2abc55de0886d227c5bd303a1d4140217435be0295b1a69: Status 404 returned error can't find the container with id bf124fefe3534047d2abc55de0886d227c5bd303a1d4140217435be0295b1a69 Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.026009 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2grm\" (UniqueName: \"kubernetes.io/projected/41c983f0-cfa7-48aa-9021-e570c07c4c43-kube-api-access-k2grm\") pod \"41c983f0-cfa7-48aa-9021-e570c07c4c43\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.026112 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-httpd-config\") pod \"41c983f0-cfa7-48aa-9021-e570c07c4c43\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.026142 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-config\") pod \"41c983f0-cfa7-48aa-9021-e570c07c4c43\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.026207 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-combined-ca-bundle\") pod \"41c983f0-cfa7-48aa-9021-e570c07c4c43\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.026463 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-ovndb-tls-certs\") pod \"41c983f0-cfa7-48aa-9021-e570c07c4c43\" (UID: \"41c983f0-cfa7-48aa-9021-e570c07c4c43\") " Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.032829 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c983f0-cfa7-48aa-9021-e570c07c4c43-kube-api-access-k2grm" (OuterVolumeSpecName: "kube-api-access-k2grm") pod "41c983f0-cfa7-48aa-9021-e570c07c4c43" (UID: "41c983f0-cfa7-48aa-9021-e570c07c4c43"). InnerVolumeSpecName "kube-api-access-k2grm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.038792 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "41c983f0-cfa7-48aa-9021-e570c07c4c43" (UID: "41c983f0-cfa7-48aa-9021-e570c07c4c43"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.113234 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c8dd-account-create-update-zmxgn"] Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.130036 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2grm\" (UniqueName: \"kubernetes.io/projected/41c983f0-cfa7-48aa-9021-e570c07c4c43-kube-api-access-k2grm\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.130063 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.162255 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-config" (OuterVolumeSpecName: "config") pod "41c983f0-cfa7-48aa-9021-e570c07c4c43" (UID: "41c983f0-cfa7-48aa-9021-e570c07c4c43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.168008 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c983f0-cfa7-48aa-9021-e570c07c4c43" (UID: "41c983f0-cfa7-48aa-9021-e570c07c4c43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.187472 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "41c983f0-cfa7-48aa-9021-e570c07c4c43" (UID: "41c983f0-cfa7-48aa-9021-e570c07c4c43"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.232046 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.232111 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.232127 4903 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/41c983f0-cfa7-48aa-9021-e570c07c4c43-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.368090 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-njdbg"] Jan 28 16:06:53 crc kubenswrapper[4903]: W0128 16:06:53.380324 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69121677_f86b_414e_bcba_b7e808aff916.slice/crio-00a73e4df268ad68ca2bfa616c1cb73e43034bbc1bd82d54f2bf18fd759affe2 WatchSource:0}: Error finding container 00a73e4df268ad68ca2bfa616c1cb73e43034bbc1bd82d54f2bf18fd759affe2: Status 404 returned error can't find the container with id 00a73e4df268ad68ca2bfa616c1cb73e43034bbc1bd82d54f2bf18fd759affe2 Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.772451 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fwdxv" event={"ID":"30606f8f-095e-47cc-8784-9ea99eaf293a","Type":"ContainerStarted","Data":"e504a8ff9406e8c82665b294595d875daa04f16ecc7011c455d97944fbe1af52"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.779733 4903 generic.go:334] "Generic (PLEG): container finished" podID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerID="00ea57d1454b0c9e617fc2379c72affc83af76a8b09b95f41f4934d0ab93e9ad" exitCode=0 Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.779916 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerDied","Data":"00ea57d1454b0c9e617fc2379c72affc83af76a8b09b95f41f4934d0ab93e9ad"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.782409 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-756cdffcb8-s2nn9" event={"ID":"41c983f0-cfa7-48aa-9021-e570c07c4c43","Type":"ContainerDied","Data":"c83e611f2a7f55d2ba25169d661f96f0a06d3fccac6aec0d905f91d893f0275e"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.782443 4903 scope.go:117] "RemoveContainer" containerID="d6878bfb9cae3d3ccde58fc06ed4eea3ec4003552e092da20838ae81544d9587" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.782593 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-756cdffcb8-s2nn9" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.785547 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" event={"ID":"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2","Type":"ContainerStarted","Data":"3d79c7fac1e05948f71a70b69d96880c808e2171372e84d17bd6b7678b6acf18"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.785574 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" event={"ID":"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2","Type":"ContainerStarted","Data":"27358a30251a4017863fbb0e18c62d10b43100c96b39fad399abcf9bd420ccaa"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.791786 4903 generic.go:334] "Generic (PLEG): container finished" podID="4756c433-f387-49e6-ada4-56bec03547c5" containerID="d809e13332af93721ef1dd254566bd94490c3996a4bf8acf4d8aef340c6f49cd" exitCode=0 Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.791877 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jpjph" event={"ID":"4756c433-f387-49e6-ada4-56bec03547c5","Type":"ContainerDied","Data":"d809e13332af93721ef1dd254566bd94490c3996a4bf8acf4d8aef340c6f49cd"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.796004 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q5hf4" event={"ID":"ac9ffd7e-7027-4e36-ad58-163afe824cc5","Type":"ContainerStarted","Data":"da3ec781d8646476efbdb53148c4a00afd58db09510fb0c855cd6849637a2a99"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.796042 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q5hf4" event={"ID":"ac9ffd7e-7027-4e36-ad58-163afe824cc5","Type":"ContainerStarted","Data":"bf124fefe3534047d2abc55de0886d227c5bd303a1d4140217435be0295b1a69"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.801686 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" event={"ID":"69121677-f86b-414e-bcba-b7e808aff916","Type":"ContainerStarted","Data":"1745bf5b44dc2b638f573aed0bd13f0c645dc6779b31ff17507a4f55bf433cbe"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.801948 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" event={"ID":"69121677-f86b-414e-bcba-b7e808aff916","Type":"ContainerStarted","Data":"00a73e4df268ad68ca2bfa616c1cb73e43034bbc1bd82d54f2bf18fd759affe2"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.804424 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-fwdxv" podStartSLOduration=2.804399419 podStartE2EDuration="2.804399419s" podCreationTimestamp="2026-01-28 16:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:53.793490882 +0000 UTC m=+1286.069462393" watchObservedRunningTime="2026-01-28 16:06:53.804399419 +0000 UTC m=+1286.080370930" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.806799 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6e6-account-create-update-st6gx" event={"ID":"7f38f215-5d58-4933-90c7-ccf27a223339","Type":"ContainerStarted","Data":"ffb559a01621d570504e53e37340cccc96d1714cec7712f2a1c2850cc3db6fee"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.806958 4903 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6e6-account-create-update-st6gx" event={"ID":"7f38f215-5d58-4933-90c7-ccf27a223339","Type":"ContainerStarted","Data":"286d09100fa7100b37a8154583fde3139739b94b8445e7249b2b91fc493f0e61"} Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.828477 4903 scope.go:117] "RemoveContainer" containerID="01040a11f788e4571e2ef7dad1033cf47b4b204a8fef5289b42053b81549198c" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.836063 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" podStartSLOduration=1.836022432 podStartE2EDuration="1.836022432s" podCreationTimestamp="2026-01-28 16:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:53.826479912 +0000 UTC m=+1286.102451443" watchObservedRunningTime="2026-01-28 16:06:53.836022432 +0000 UTC m=+1286.111993943" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.850374 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-c6e6-account-create-update-st6gx" podStartSLOduration=2.850352662 podStartE2EDuration="2.850352662s" podCreationTimestamp="2026-01-28 16:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:53.846902048 +0000 UTC m=+1286.122873559" watchObservedRunningTime="2026-01-28 16:06:53.850352662 +0000 UTC m=+1286.126324173" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.874443 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-q5hf4" podStartSLOduration=2.874426708 podStartE2EDuration="2.874426708s" podCreationTimestamp="2026-01-28 16:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:53.864938779 +0000 UTC m=+1286.140910290" watchObservedRunningTime="2026-01-28 16:06:53.874426708 +0000 UTC m=+1286.150398209" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.887804 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" podStartSLOduration=1.8877853930000001 podStartE2EDuration="1.887785393s" podCreationTimestamp="2026-01-28 16:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:06:53.881919082 +0000 UTC m=+1286.157890593" watchObservedRunningTime="2026-01-28 16:06:53.887785393 +0000 UTC m=+1286.163756904" Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.908767 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-756cdffcb8-s2nn9"] Jan 28 16:06:53 crc kubenswrapper[4903]: I0128 16:06:53.916945 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-756cdffcb8-s2nn9"] Jan 28 16:06:54 crc kubenswrapper[4903]: I0128 16:06:54.423161 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" path="/var/lib/kubelet/pods/41c983f0-cfa7-48aa-9021-e570c07c4c43/volumes" Jan 28 16:06:54 crc kubenswrapper[4903]: I0128 16:06:54.819308 4903 generic.go:334] "Generic (PLEG): container finished" podID="ac9ffd7e-7027-4e36-ad58-163afe824cc5" 
containerID="da3ec781d8646476efbdb53148c4a00afd58db09510fb0c855cd6849637a2a99" exitCode=0 Jan 28 16:06:54 crc kubenswrapper[4903]: I0128 16:06:54.819393 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q5hf4" event={"ID":"ac9ffd7e-7027-4e36-ad58-163afe824cc5","Type":"ContainerDied","Data":"da3ec781d8646476efbdb53148c4a00afd58db09510fb0c855cd6849637a2a99"} Jan 28 16:06:54 crc kubenswrapper[4903]: I0128 16:06:54.822207 4903 generic.go:334] "Generic (PLEG): container finished" podID="30606f8f-095e-47cc-8784-9ea99eaf293a" containerID="e504a8ff9406e8c82665b294595d875daa04f16ecc7011c455d97944fbe1af52" exitCode=0 Jan 28 16:06:54 crc kubenswrapper[4903]: I0128 16:06:54.822262 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fwdxv" event={"ID":"30606f8f-095e-47cc-8784-9ea99eaf293a","Type":"ContainerDied","Data":"e504a8ff9406e8c82665b294595d875daa04f16ecc7011c455d97944fbe1af52"} Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.200256 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.369329 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/4756c433-f387-49e6-ada4-56bec03547c5-kube-api-access-75gc5\") pod \"4756c433-f387-49e6-ada4-56bec03547c5\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.369465 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4756c433-f387-49e6-ada4-56bec03547c5-operator-scripts\") pod \"4756c433-f387-49e6-ada4-56bec03547c5\" (UID: \"4756c433-f387-49e6-ada4-56bec03547c5\") " Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.370390 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4756c433-f387-49e6-ada4-56bec03547c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4756c433-f387-49e6-ada4-56bec03547c5" (UID: "4756c433-f387-49e6-ada4-56bec03547c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.375863 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4756c433-f387-49e6-ada4-56bec03547c5-kube-api-access-75gc5" (OuterVolumeSpecName: "kube-api-access-75gc5") pod "4756c433-f387-49e6-ada4-56bec03547c5" (UID: "4756c433-f387-49e6-ada4-56bec03547c5"). InnerVolumeSpecName "kube-api-access-75gc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.471296 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/4756c433-f387-49e6-ada4-56bec03547c5-kube-api-access-75gc5\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.471619 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4756c433-f387-49e6-ada4-56bec03547c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.833775 4903 generic.go:334] "Generic (PLEG): container finished" podID="69121677-f86b-414e-bcba-b7e808aff916" containerID="1745bf5b44dc2b638f573aed0bd13f0c645dc6779b31ff17507a4f55bf433cbe" exitCode=0 Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.833840 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" event={"ID":"69121677-f86b-414e-bcba-b7e808aff916","Type":"ContainerDied","Data":"1745bf5b44dc2b638f573aed0bd13f0c645dc6779b31ff17507a4f55bf433cbe"} Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.835776 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jpjph" event={"ID":"4756c433-f387-49e6-ada4-56bec03547c5","Type":"ContainerDied","Data":"7c0cb675d8f91311e4b0bd68922244f9575985d6b2651d18335b9f71aff2760e"} Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.835854 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c0cb675d8f91311e4b0bd68922244f9575985d6b2651d18335b9f71aff2760e" Jan 28 16:06:55 crc kubenswrapper[4903]: I0128 16:06:55.835811 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jpjph" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.278642 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.288053 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.386514 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9ffd7e-7027-4e36-ad58-163afe824cc5-operator-scripts\") pod \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.386632 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4kfg\" (UniqueName: \"kubernetes.io/projected/30606f8f-095e-47cc-8784-9ea99eaf293a-kube-api-access-l4kfg\") pod \"30606f8f-095e-47cc-8784-9ea99eaf293a\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.386667 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvgzw\" (UniqueName: \"kubernetes.io/projected/ac9ffd7e-7027-4e36-ad58-163afe824cc5-kube-api-access-kvgzw\") pod \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\" (UID: \"ac9ffd7e-7027-4e36-ad58-163afe824cc5\") " Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.386743 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30606f8f-095e-47cc-8784-9ea99eaf293a-operator-scripts\") pod \"30606f8f-095e-47cc-8784-9ea99eaf293a\" (UID: \"30606f8f-095e-47cc-8784-9ea99eaf293a\") " Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.387280 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9ffd7e-7027-4e36-ad58-163afe824cc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac9ffd7e-7027-4e36-ad58-163afe824cc5" (UID: "ac9ffd7e-7027-4e36-ad58-163afe824cc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.387497 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30606f8f-095e-47cc-8784-9ea99eaf293a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30606f8f-095e-47cc-8784-9ea99eaf293a" (UID: "30606f8f-095e-47cc-8784-9ea99eaf293a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.392956 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30606f8f-095e-47cc-8784-9ea99eaf293a-kube-api-access-l4kfg" (OuterVolumeSpecName: "kube-api-access-l4kfg") pod "30606f8f-095e-47cc-8784-9ea99eaf293a" (UID: "30606f8f-095e-47cc-8784-9ea99eaf293a"). InnerVolumeSpecName "kube-api-access-l4kfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.394117 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9ffd7e-7027-4e36-ad58-163afe824cc5-kube-api-access-kvgzw" (OuterVolumeSpecName: "kube-api-access-kvgzw") pod "ac9ffd7e-7027-4e36-ad58-163afe824cc5" (UID: "ac9ffd7e-7027-4e36-ad58-163afe824cc5"). InnerVolumeSpecName "kube-api-access-kvgzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.496581 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac9ffd7e-7027-4e36-ad58-163afe824cc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.496622 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4kfg\" (UniqueName: \"kubernetes.io/projected/30606f8f-095e-47cc-8784-9ea99eaf293a-kube-api-access-l4kfg\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.496633 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvgzw\" (UniqueName: \"kubernetes.io/projected/ac9ffd7e-7027-4e36-ad58-163afe824cc5-kube-api-access-kvgzw\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.496644 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30606f8f-095e-47cc-8784-9ea99eaf293a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.613298 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.613612 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:06:56 crc kubenswrapper[4903]: E0128 16:06:56.632702 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac9ffd7e_7027_4e36_ad58_163afe824cc5.slice\": RecentStats: unable to find data in memory cache]" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.849083 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fwdxv" event={"ID":"30606f8f-095e-47cc-8784-9ea99eaf293a","Type":"ContainerDied","Data":"686b398144e5e531e6576938f0d3e0df818d8a56161128f95699fd59ed500262"} Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.850745 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="686b398144e5e531e6576938f0d3e0df818d8a56161128f95699fd59ed500262" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.849306 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fwdxv" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.851430 4903 generic.go:334] "Generic (PLEG): container finished" podID="ca0f3bda-8e27-4887-b3e2-8b04b92d65b2" containerID="3d79c7fac1e05948f71a70b69d96880c808e2171372e84d17bd6b7678b6acf18" exitCode=0 Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.851615 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" event={"ID":"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2","Type":"ContainerDied","Data":"3d79c7fac1e05948f71a70b69d96880c808e2171372e84d17bd6b7678b6acf18"} Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.854380 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q5hf4" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.856350 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q5hf4" event={"ID":"ac9ffd7e-7027-4e36-ad58-163afe824cc5","Type":"ContainerDied","Data":"bf124fefe3534047d2abc55de0886d227c5bd303a1d4140217435be0295b1a69"} Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.856411 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf124fefe3534047d2abc55de0886d227c5bd303a1d4140217435be0295b1a69" Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.862185 4903 generic.go:334] "Generic (PLEG): container finished" podID="7f38f215-5d58-4933-90c7-ccf27a223339" containerID="ffb559a01621d570504e53e37340cccc96d1714cec7712f2a1c2850cc3db6fee" exitCode=0 Jan 28 16:06:56 crc kubenswrapper[4903]: I0128 16:06:56.862293 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6e6-account-create-update-st6gx" event={"ID":"7f38f215-5d58-4933-90c7-ccf27a223339","Type":"ContainerDied","Data":"ffb559a01621d570504e53e37340cccc96d1714cec7712f2a1c2850cc3db6fee"} Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.209368 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.311450 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69121677-f86b-414e-bcba-b7e808aff916-operator-scripts\") pod \"69121677-f86b-414e-bcba-b7e808aff916\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.311980 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69121677-f86b-414e-bcba-b7e808aff916-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69121677-f86b-414e-bcba-b7e808aff916" (UID: "69121677-f86b-414e-bcba-b7e808aff916"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.312054 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqpzs\" (UniqueName: \"kubernetes.io/projected/69121677-f86b-414e-bcba-b7e808aff916-kube-api-access-wqpzs\") pod \"69121677-f86b-414e-bcba-b7e808aff916\" (UID: \"69121677-f86b-414e-bcba-b7e808aff916\") " Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.312833 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69121677-f86b-414e-bcba-b7e808aff916-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.323086 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69121677-f86b-414e-bcba-b7e808aff916-kube-api-access-wqpzs" (OuterVolumeSpecName: "kube-api-access-wqpzs") pod "69121677-f86b-414e-bcba-b7e808aff916" (UID: "69121677-f86b-414e-bcba-b7e808aff916"). InnerVolumeSpecName "kube-api-access-wqpzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.414039 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqpzs\" (UniqueName: \"kubernetes.io/projected/69121677-f86b-414e-bcba-b7e808aff916-kube-api-access-wqpzs\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.900549 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.901422 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ff7-account-create-update-njdbg" event={"ID":"69121677-f86b-414e-bcba-b7e808aff916","Type":"ContainerDied","Data":"00a73e4df268ad68ca2bfa616c1cb73e43034bbc1bd82d54f2bf18fd759affe2"} Jan 28 16:06:57 crc kubenswrapper[4903]: I0128 16:06:57.901461 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a73e4df268ad68ca2bfa616c1cb73e43034bbc1bd82d54f2bf18fd759affe2" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.381353 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.388373 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.533473 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f38f215-5d58-4933-90c7-ccf27a223339-operator-scripts\") pod \"7f38f215-5d58-4933-90c7-ccf27a223339\" (UID: \"7f38f215-5d58-4933-90c7-ccf27a223339\") " Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.533613 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmdwm\" (UniqueName: \"kubernetes.io/projected/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-kube-api-access-mmdwm\") pod \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.533710 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-operator-scripts\") pod \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\" (UID: \"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2\") " Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.533868 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqcg6\" (UniqueName: \"kubernetes.io/projected/7f38f215-5d58-4933-90c7-ccf27a223339-kube-api-access-xqcg6\") pod \"7f38f215-5d58-4933-90c7-ccf27a223339\" (UID: \"7f38f215-5d58-4933-90c7-ccf27a223339\") " Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.534442 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca0f3bda-8e27-4887-b3e2-8b04b92d65b2" (UID: "ca0f3bda-8e27-4887-b3e2-8b04b92d65b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.535246 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f38f215-5d58-4933-90c7-ccf27a223339-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f38f215-5d58-4933-90c7-ccf27a223339" (UID: "7f38f215-5d58-4933-90c7-ccf27a223339"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.538419 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f38f215-5d58-4933-90c7-ccf27a223339-kube-api-access-xqcg6" (OuterVolumeSpecName: "kube-api-access-xqcg6") pod "7f38f215-5d58-4933-90c7-ccf27a223339" (UID: "7f38f215-5d58-4933-90c7-ccf27a223339"). InnerVolumeSpecName "kube-api-access-xqcg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.544808 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-kube-api-access-mmdwm" (OuterVolumeSpecName: "kube-api-access-mmdwm") pod "ca0f3bda-8e27-4887-b3e2-8b04b92d65b2" (UID: "ca0f3bda-8e27-4887-b3e2-8b04b92d65b2"). InnerVolumeSpecName "kube-api-access-mmdwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.635596 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqcg6\" (UniqueName: \"kubernetes.io/projected/7f38f215-5d58-4933-90c7-ccf27a223339-kube-api-access-xqcg6\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.635635 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f38f215-5d58-4933-90c7-ccf27a223339-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.635650 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmdwm\" (UniqueName: \"kubernetes.io/projected/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-kube-api-access-mmdwm\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.635663 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.914628 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6e6-account-create-update-st6gx" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.914628 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6e6-account-create-update-st6gx" event={"ID":"7f38f215-5d58-4933-90c7-ccf27a223339","Type":"ContainerDied","Data":"286d09100fa7100b37a8154583fde3139739b94b8445e7249b2b91fc493f0e61"} Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.914688 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="286d09100fa7100b37a8154583fde3139739b94b8445e7249b2b91fc493f0e61" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.919811 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" event={"ID":"ca0f3bda-8e27-4887-b3e2-8b04b92d65b2","Type":"ContainerDied","Data":"27358a30251a4017863fbb0e18c62d10b43100c96b39fad399abcf9bd420ccaa"} Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.919860 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27358a30251a4017863fbb0e18c62d10b43100c96b39fad399abcf9bd420ccaa" Jan 28 16:06:58 crc kubenswrapper[4903]: I0128 16:06:58.919874 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c8dd-account-create-update-zmxgn" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.326944 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xvrh9"] Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.328607 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-httpd" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.328679 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-httpd" Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.328740 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4756c433-f387-49e6-ada4-56bec03547c5" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.328788 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4756c433-f387-49e6-ada4-56bec03547c5" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.328857 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0f3bda-8e27-4887-b3e2-8b04b92d65b2" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.328907 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0f3bda-8e27-4887-b3e2-8b04b92d65b2" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.328953 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-api" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329002 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-api" Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.329057 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9ffd7e-7027-4e36-ad58-163afe824cc5" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329104 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9ffd7e-7027-4e36-ad58-163afe824cc5" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.329169 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69121677-f86b-414e-bcba-b7e808aff916" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329218 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="69121677-f86b-414e-bcba-b7e808aff916" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.329272 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f38f215-5d58-4933-90c7-ccf27a223339" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329324 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f38f215-5d58-4933-90c7-ccf27a223339" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: E0128 16:07:02.329382 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30606f8f-095e-47cc-8784-9ea99eaf293a" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329437 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="30606f8f-095e-47cc-8784-9ea99eaf293a" containerName="mariadb-database-create" Jan 28 
16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329663 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4756c433-f387-49e6-ada4-56bec03547c5" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329737 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="69121677-f86b-414e-bcba-b7e808aff916" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329792 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-httpd" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329845 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0f3bda-8e27-4887-b3e2-8b04b92d65b2" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329896 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="30606f8f-095e-47cc-8784-9ea99eaf293a" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.329949 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f38f215-5d58-4933-90c7-ccf27a223339" containerName="mariadb-account-create-update" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.330002 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c983f0-cfa7-48aa-9021-e570c07c4c43" containerName="neutron-api" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.330066 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9ffd7e-7027-4e36-ad58-163afe824cc5" containerName="mariadb-database-create" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.330739 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.334211 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.334422 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.334586 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nvkh6" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.342337 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xvrh9"] Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.508419 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzkj\" (UniqueName: \"kubernetes.io/projected/94d476df-369e-428e-945d-f2a3dc1a78ea-kube-api-access-qdzkj\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.508486 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.508632 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-config-data\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.508777 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-scripts\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.610509 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzkj\" (UniqueName: \"kubernetes.io/projected/94d476df-369e-428e-945d-f2a3dc1a78ea-kube-api-access-qdzkj\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.610597 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.610645 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-config-data\") pod \"nova-cell0-conductor-db-sync-xvrh9\" 
(UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.610753 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-scripts\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.615389 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.616164 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-config-data\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.617035 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-scripts\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.642100 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzkj\" (UniqueName: \"kubernetes.io/projected/94d476df-369e-428e-945d-f2a3dc1a78ea-kube-api-access-qdzkj\") pod \"nova-cell0-conductor-db-sync-xvrh9\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.653870 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.956587 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e1ce53ab-7d85-47b9-a886-162ef3726997","Type":"ContainerStarted","Data":"79b4ee686b25bbef16eefb66785f1f74ebe67f05a47f44b4dfa49ba85ce6d221"} Jan 28 16:07:02 crc kubenswrapper[4903]: I0128 16:07:02.979674 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.835075727 podStartE2EDuration="31.979654406s" podCreationTimestamp="2026-01-28 16:06:31 +0000 UTC" firstStartedPulling="2026-01-28 16:06:32.68086872 +0000 UTC m=+1264.956840231" lastFinishedPulling="2026-01-28 16:07:01.825447399 +0000 UTC m=+1294.101418910" observedRunningTime="2026-01-28 16:07:02.972274935 +0000 UTC m=+1295.248246446" watchObservedRunningTime="2026-01-28 16:07:02.979654406 +0000 UTC m=+1295.255625927" Jan 28 16:07:03 crc kubenswrapper[4903]: I0128 16:07:03.153198 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xvrh9"] Jan 28 16:07:03 crc kubenswrapper[4903]: I0128 16:07:03.968722 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" event={"ID":"94d476df-369e-428e-945d-f2a3dc1a78ea","Type":"ContainerStarted","Data":"5457310cf37aefedf7cf0bdeb45ae66bb2a0ef92a53db00a311c998b61b3731b"} Jan 28 16:07:04 crc kubenswrapper[4903]: I0128 16:07:04.278577 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:07:04 crc kubenswrapper[4903]: I0128 16:07:04.278866 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-log" containerID="cri-o://e60036d9f4a459543f76f778dfe1619ad283a64587191bebb3dcd09d034ce5f1" gracePeriod=30 Jan 28 16:07:04 crc kubenswrapper[4903]: I0128 16:07:04.278948 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-httpd" containerID="cri-o://6302c18d46fac1f965887e2e9661489f11c7f3c94dd5110d755905bdd97cf914" gracePeriod=30 Jan 28 16:07:04 crc kubenswrapper[4903]: I0128 16:07:04.980986 4903 generic.go:334] "Generic (PLEG): container finished" podID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerID="e60036d9f4a459543f76f778dfe1619ad283a64587191bebb3dcd09d034ce5f1" exitCode=143 Jan 28 16:07:04 crc kubenswrapper[4903]: I0128 16:07:04.981110 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40392bf6-fb24-41cb-b61a-2b6d768b3f9b","Type":"ContainerDied","Data":"e60036d9f4a459543f76f778dfe1619ad283a64587191bebb3dcd09d034ce5f1"} Jan 28 16:07:05 crc kubenswrapper[4903]: I0128 16:07:05.934089 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 28 16:07:07 crc kubenswrapper[4903]: I0128 16:07:07.890733 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": dial 
tcp 10.217.0.149:9292: connect: connection refused" Jan 28 16:07:07 crc kubenswrapper[4903]: I0128 16:07:07.890733 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.149:9292/healthcheck\": dial tcp 10.217.0.149:9292: connect: connection refused" Jan 28 16:07:09 crc kubenswrapper[4903]: I0128 16:07:09.035969 4903 generic.go:334] "Generic (PLEG): container finished" podID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerID="6302c18d46fac1f965887e2e9661489f11c7f3c94dd5110d755905bdd97cf914" exitCode=0 Jan 28 16:07:09 crc kubenswrapper[4903]: I0128 16:07:09.036199 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40392bf6-fb24-41cb-b61a-2b6d768b3f9b","Type":"ContainerDied","Data":"6302c18d46fac1f965887e2e9661489f11c7f3c94dd5110d755905bdd97cf914"} Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.529236 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.685798 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-logs\") pod \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.685900 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-config-data\") pod \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686017 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-httpd-run\") pod \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686050 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-combined-ca-bundle\") pod \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686070 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-internal-tls-certs\") pod \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686085 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-scripts\") pod \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686103 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp6xc\" (UniqueName: \"kubernetes.io/projected/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-kube-api-access-sp6xc\") pod 
\"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686129 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\" (UID: \"40392bf6-fb24-41cb-b61a-2b6d768b3f9b\") " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686500 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-logs" (OuterVolumeSpecName: "logs") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.686734 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.691768 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-scripts" (OuterVolumeSpecName: "scripts") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.695098 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.696837 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-kube-api-access-sp6xc" (OuterVolumeSpecName: "kube-api-access-sp6xc") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "kube-api-access-sp6xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.724840 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.741579 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.748502 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-config-data" (OuterVolumeSpecName: "config-data") pod "40392bf6-fb24-41cb-b61a-2b6d768b3f9b" (UID: "40392bf6-fb24-41cb-b61a-2b6d768b3f9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788278 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788417 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788435 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788449 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788460 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp6xc\" (UniqueName: \"kubernetes.io/projected/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-kube-api-access-sp6xc\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788490 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788504 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.788515 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40392bf6-fb24-41cb-b61a-2b6d768b3f9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.806637 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 28 16:07:10 crc kubenswrapper[4903]: I0128 16:07:10.890145 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.052010 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.052337 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"40392bf6-fb24-41cb-b61a-2b6d768b3f9b","Type":"ContainerDied","Data":"c52214e41daaffc0ab7e69d23de1a8abffdfc3a943332181afdbe3872807c24a"} Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.052407 4903 scope.go:117] "RemoveContainer" containerID="6302c18d46fac1f965887e2e9661489f11c7f3c94dd5110d755905bdd97cf914" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.053680 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" event={"ID":"94d476df-369e-428e-945d-f2a3dc1a78ea","Type":"ContainerStarted","Data":"be6539b41f3ffef5e41bc2a35bcc5c813d4bb875f67dde5ceb756b4027a5dd69"} Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.076759 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" podStartSLOduration=2.011499159 podStartE2EDuration="9.076741928s" podCreationTimestamp="2026-01-28 16:07:02 +0000 UTC" firstStartedPulling="2026-01-28 16:07:03.157600006 +0000 UTC m=+1295.433571517" lastFinishedPulling="2026-01-28 16:07:10.222842765 +0000 UTC m=+1302.498814286" observedRunningTime="2026-01-28 16:07:11.070813596 +0000 UTC m=+1303.346785107" watchObservedRunningTime="2026-01-28 16:07:11.076741928 +0000 UTC m=+1303.352713429" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.080316 4903 scope.go:117] "RemoveContainer" containerID="e60036d9f4a459543f76f778dfe1619ad283a64587191bebb3dcd09d034ce5f1" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.102628 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.115944 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.135972 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:07:11 crc kubenswrapper[4903]: E0128 16:07:11.136495 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-log" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.136514 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-log" Jan 28 16:07:11 crc kubenswrapper[4903]: E0128 16:07:11.136562 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-httpd" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.136571 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-httpd" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.136768 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-httpd" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.136796 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" containerName="glance-log" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.137777 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.139606 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.140145 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.144288 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296115 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296166 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296201 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296223 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296360 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296629 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpc8k\" (UniqueName: \"kubernetes.io/projected/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-kube-api-access-zpc8k\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296915 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.296998 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399099 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399184 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399231 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399252 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399290 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399306 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399332 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399388 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpc8k\" (UniqueName: \"kubernetes.io/projected/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-kube-api-access-zpc8k\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.399948 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.400126 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.400769 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.405223 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.405267 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.415696 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.416088 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.419392 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpc8k\" (UniqueName: \"kubernetes.io/projected/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-kube-api-access-zpc8k\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.427378 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " pod="openstack/glance-default-internal-api-0" Jan 28 16:07:11 crc kubenswrapper[4903]: I0128 16:07:11.502548 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:12 crc kubenswrapper[4903]: I0128 16:07:12.078763 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:07:12 crc kubenswrapper[4903]: W0128 16:07:12.081820 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c3ca866_aac2_4b4f_ac25_71e741d9db2f.slice/crio-26e3bda1ae259924517b42ce507802f2aee0acb2100f04be9a88c6da9afbc546 WatchSource:0}: Error finding container 26e3bda1ae259924517b42ce507802f2aee0acb2100f04be9a88c6da9afbc546: Status 404 returned error can't find the container with id 26e3bda1ae259924517b42ce507802f2aee0acb2100f04be9a88c6da9afbc546 Jan 28 16:07:12 crc kubenswrapper[4903]: I0128 16:07:12.426457 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40392bf6-fb24-41cb-b61a-2b6d768b3f9b" path="/var/lib/kubelet/pods/40392bf6-fb24-41cb-b61a-2b6d768b3f9b/volumes" Jan 28 16:07:13 crc kubenswrapper[4903]: I0128 16:07:13.074998 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c3ca866-aac2-4b4f-ac25-71e741d9db2f","Type":"ContainerStarted","Data":"c34ec1bdca9dcf388b45d4df31616bfc2ee16b7a70a6f94f04662492238c5d30"} Jan 28 16:07:13 crc kubenswrapper[4903]: I0128 16:07:13.075051 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c3ca866-aac2-4b4f-ac25-71e741d9db2f","Type":"ContainerStarted","Data":"26e3bda1ae259924517b42ce507802f2aee0acb2100f04be9a88c6da9afbc546"} Jan 28 16:07:14 crc kubenswrapper[4903]: I0128 16:07:14.084015 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c3ca866-aac2-4b4f-ac25-71e741d9db2f","Type":"ContainerStarted","Data":"b0fb34b235f11adc68d9beed30603f223ccc79ee9902295559769c17c5aa973b"} Jan 28 16:07:14 crc kubenswrapper[4903]: I0128 16:07:14.108346 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.1083256009999998 podStartE2EDuration="3.108325601s" podCreationTimestamp="2026-01-28 16:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:07:14.103329776 +0000 UTC m=+1306.379301317" watchObservedRunningTime="2026-01-28 16:07:14.108325601 +0000 UTC m=+1306.384297122" Jan 28 16:07:14 crc kubenswrapper[4903]: I0128 16:07:14.654886 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:07:14 crc kubenswrapper[4903]: I0128 16:07:14.655174 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-log" containerID="cri-o://6d3f8b7a72c94efad6f7723a41564b3b72e80239ea0797b5173fa8f40d6d1376" gracePeriod=30 Jan 28 16:07:14 crc kubenswrapper[4903]: I0128 16:07:14.655279 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-httpd" containerID="cri-o://f4190431caf39e1cf62f8df34560e8922fa98469cc57b19abf0293f2b23bc912" gracePeriod=30 Jan 28 16:07:15 crc kubenswrapper[4903]: I0128 16:07:15.095316 4903 generic.go:334] "Generic (PLEG): container 
finished" podID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerID="6d3f8b7a72c94efad6f7723a41564b3b72e80239ea0797b5173fa8f40d6d1376" exitCode=143 Jan 28 16:07:15 crc kubenswrapper[4903]: I0128 16:07:15.095411 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c73a1965-ccff-43eb-a317-91ca6e551c4e","Type":"ContainerDied","Data":"6d3f8b7a72c94efad6f7723a41564b3b72e80239ea0797b5173fa8f40d6d1376"} Jan 28 16:07:17 crc kubenswrapper[4903]: I0128 16:07:17.725010 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:35000->10.217.0.150:9292: read: connection reset by peer" Jan 28 16:07:17 crc kubenswrapper[4903]: I0128 16:07:17.725099 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.150:9292/healthcheck\": read tcp 10.217.0.2:34986->10.217.0.150:9292: read: connection reset by peer" Jan 28 16:07:20 crc kubenswrapper[4903]: I0128 16:07:20.155215 4903 generic.go:334] "Generic (PLEG): container finished" podID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerID="f4190431caf39e1cf62f8df34560e8922fa98469cc57b19abf0293f2b23bc912" exitCode=0 Jan 28 16:07:20 crc kubenswrapper[4903]: I0128 16:07:20.155316 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c73a1965-ccff-43eb-a317-91ca6e551c4e","Type":"ContainerDied","Data":"f4190431caf39e1cf62f8df34560e8922fa98469cc57b19abf0293f2b23bc912"} Jan 28 16:07:21 crc kubenswrapper[4903]: I0128 16:07:21.503254 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:21 crc kubenswrapper[4903]: I0128 16:07:21.503541 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:21 crc kubenswrapper[4903]: I0128 16:07:21.542626 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:21 crc kubenswrapper[4903]: I0128 16:07:21.550027 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:22 crc kubenswrapper[4903]: I0128 16:07:22.173195 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:22 crc kubenswrapper[4903]: I0128 16:07:22.173546 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:23 crc kubenswrapper[4903]: I0128 16:07:23.185062 4903 generic.go:334] "Generic (PLEG): container finished" podID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerID="c3461eeb5e79b143d43fce38089e489d4ea3bc6fdb49c419922bbb6955ba83fd" exitCode=137 Jan 28 16:07:23 crc kubenswrapper[4903]: I0128 16:07:23.185108 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerDied","Data":"c3461eeb5e79b143d43fce38089e489d4ea3bc6fdb49c419922bbb6955ba83fd"} Jan 28 16:07:23 crc kubenswrapper[4903]: I0128 16:07:23.964093 4903 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:24 crc kubenswrapper[4903]: I0128 16:07:24.063412 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 16:07:26 crc kubenswrapper[4903]: I0128 16:07:26.613423 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:07:26 crc kubenswrapper[4903]: I0128 16:07:26.614018 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:07:26 crc kubenswrapper[4903]: I0128 16:07:26.614064 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:07:26 crc kubenswrapper[4903]: I0128 16:07:26.614905 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"993067151bbc38bd867efd2a0048a350ec2c3e1b2fa7b3b79554189c276ba379"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:07:26 crc kubenswrapper[4903]: I0128 16:07:26.614976 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://993067151bbc38bd867efd2a0048a350ec2c3e1b2fa7b3b79554189c276ba379" gracePeriod=600 Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.408164 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.499520 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-config-data\") pod \"bfff7b8a-803b-4945-90a3-d135faedfe34\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.499615 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-run-httpd\") pod \"bfff7b8a-803b-4945-90a3-d135faedfe34\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.499680 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-scripts\") pod \"bfff7b8a-803b-4945-90a3-d135faedfe34\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.499771 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-combined-ca-bundle\") pod \"bfff7b8a-803b-4945-90a3-d135faedfe34\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.499805 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-log-httpd\") pod \"bfff7b8a-803b-4945-90a3-d135faedfe34\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.499855 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-sg-core-conf-yaml\") pod \"bfff7b8a-803b-4945-90a3-d135faedfe34\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.499911 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsz4l\" (UniqueName: \"kubernetes.io/projected/bfff7b8a-803b-4945-90a3-d135faedfe34-kube-api-access-dsz4l\") pod \"bfff7b8a-803b-4945-90a3-d135faedfe34\" (UID: \"bfff7b8a-803b-4945-90a3-d135faedfe34\") " Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.500311 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bfff7b8a-803b-4945-90a3-d135faedfe34" (UID: "bfff7b8a-803b-4945-90a3-d135faedfe34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.500497 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.503685 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bfff7b8a-803b-4945-90a3-d135faedfe34" (UID: "bfff7b8a-803b-4945-90a3-d135faedfe34"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.506671 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfff7b8a-803b-4945-90a3-d135faedfe34-kube-api-access-dsz4l" (OuterVolumeSpecName: "kube-api-access-dsz4l") pod "bfff7b8a-803b-4945-90a3-d135faedfe34" (UID: "bfff7b8a-803b-4945-90a3-d135faedfe34"). InnerVolumeSpecName "kube-api-access-dsz4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.515017 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-scripts" (OuterVolumeSpecName: "scripts") pod "bfff7b8a-803b-4945-90a3-d135faedfe34" (UID: "bfff7b8a-803b-4945-90a3-d135faedfe34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.544181 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bfff7b8a-803b-4945-90a3-d135faedfe34" (UID: "bfff7b8a-803b-4945-90a3-d135faedfe34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.576137 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfff7b8a-803b-4945-90a3-d135faedfe34" (UID: "bfff7b8a-803b-4945-90a3-d135faedfe34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.597110 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-config-data" (OuterVolumeSpecName: "config-data") pod "bfff7b8a-803b-4945-90a3-d135faedfe34" (UID: "bfff7b8a-803b-4945-90a3-d135faedfe34"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.602178 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.602212 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsz4l\" (UniqueName: \"kubernetes.io/projected/bfff7b8a-803b-4945-90a3-d135faedfe34-kube-api-access-dsz4l\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.602224 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.602235 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.602244 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfff7b8a-803b-4945-90a3-d135faedfe34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:27 crc kubenswrapper[4903]: I0128 16:07:27.602252 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfff7b8a-803b-4945-90a3-d135faedfe34-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.230164 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="993067151bbc38bd867efd2a0048a350ec2c3e1b2fa7b3b79554189c276ba379" exitCode=0 Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.230370 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"993067151bbc38bd867efd2a0048a350ec2c3e1b2fa7b3b79554189c276ba379"} Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.230553 4903 scope.go:117] "RemoveContainer" containerID="6c77af858064eabcd955be524624cd22b78fb67a11240b85f365bfaee93bd9c0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.233301 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfff7b8a-803b-4945-90a3-d135faedfe34","Type":"ContainerDied","Data":"74f659ed6621b3a9e2851e7dca7cee033009e7a314cc9727ce25aa8d1ec2d9a7"} Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.233393 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.268411 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.275886 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299203 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:07:28 crc kubenswrapper[4903]: E0128 16:07:28.299570 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="proxy-httpd" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299589 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="proxy-httpd" Jan 28 16:07:28 crc kubenswrapper[4903]: E0128 16:07:28.299612 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-central-agent" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299619 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-central-agent" Jan 28 16:07:28 crc kubenswrapper[4903]: E0128 16:07:28.299628 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-notification-agent" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299634 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-notification-agent" Jan 28 16:07:28 crc kubenswrapper[4903]: E0128 16:07:28.299648 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="sg-core" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299653 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="sg-core" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299815 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-notification-agent" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299832 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="sg-core" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299841 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="proxy-httpd" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.299851 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" containerName="ceilometer-central-agent" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.301291 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.307911 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.308232 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.316517 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.423589 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.423693 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-run-httpd\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.423739 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-scripts\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.423841 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.423938 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-config-data\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.423971 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-log-httpd\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.424031 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bd22\" (UniqueName: \"kubernetes.io/projected/a3ac956a-bc7d-4963-94ae-939124d171f0-kube-api-access-5bd22\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.441653 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfff7b8a-803b-4945-90a3-d135faedfe34" path="/var/lib/kubelet/pods/bfff7b8a-803b-4945-90a3-d135faedfe34/volumes" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.528776 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.528831 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-run-httpd\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.528866 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-scripts\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.528955 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.529031 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-config-data\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.529061 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-log-httpd\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.529105 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bd22\" (UniqueName: \"kubernetes.io/projected/a3ac956a-bc7d-4963-94ae-939124d171f0-kube-api-access-5bd22\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.529428 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-run-httpd\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.529795 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-log-httpd\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.531202 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.531552 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.540330 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.543917 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-scripts\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.547686 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-config-data\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.548083 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bd22\" (UniqueName: \"kubernetes.io/projected/a3ac956a-bc7d-4963-94ae-939124d171f0-kube-api-access-5bd22\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.546803 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " pod="openstack/ceilometer-0" Jan 28 16:07:28 crc kubenswrapper[4903]: I0128 16:07:28.619235 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:07:29 crc kubenswrapper[4903]: I0128 16:07:29.092293 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:07:29 crc kubenswrapper[4903]: I0128 16:07:29.243805 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerStarted","Data":"ceef3d8749b1eef68562fa7cf10bc42b2ad70ee82586c311d8962d62c355d283"} Jan 28 16:07:30 crc kubenswrapper[4903]: I0128 16:07:30.397775 4903 scope.go:117] "RemoveContainer" containerID="c3461eeb5e79b143d43fce38089e489d4ea3bc6fdb49c419922bbb6955ba83fd" Jan 28 16:07:30 crc kubenswrapper[4903]: I0128 16:07:30.425590 4903 scope.go:117] "RemoveContainer" containerID="928d84dfd649225fa5c1eb6237f229f51c806cb41bc72d9d06a671ea98cb939e" Jan 28 16:07:30 crc kubenswrapper[4903]: I0128 16:07:30.450377 4903 scope.go:117] "RemoveContainer" containerID="00ea57d1454b0c9e617fc2379c72affc83af76a8b09b95f41f4934d0ab93e9ad" Jan 28 16:07:30 crc kubenswrapper[4903]: I0128 16:07:30.472959 4903 scope.go:117] "RemoveContainer" containerID="bb58e6e7b3fb998dd25e7187d57e812a8ffd22754767eb329724dfb80a9cecfa" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.281735 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c"} Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.401881 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502298 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-combined-ca-bundle\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502360 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-httpd-run\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502407 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-config-data\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502432 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-scripts\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502468 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4mf\" (UniqueName: \"kubernetes.io/projected/c73a1965-ccff-43eb-a317-91ca6e551c4e-kube-api-access-ct4mf\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502607 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-public-tls-certs\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502630 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-logs\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.502665 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c73a1965-ccff-43eb-a317-91ca6e551c4e\" (UID: \"c73a1965-ccff-43eb-a317-91ca6e551c4e\") " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.505973 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.509696 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-logs" (OuterVolumeSpecName: "logs") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.511156 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.512284 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73a1965-ccff-43eb-a317-91ca6e551c4e-kube-api-access-ct4mf" (OuterVolumeSpecName: "kube-api-access-ct4mf") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "kube-api-access-ct4mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.519856 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-scripts" (OuterVolumeSpecName: "scripts") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.538028 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.560320 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-config-data" (OuterVolumeSpecName: "config-data") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.563347 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c73a1965-ccff-43eb-a317-91ca6e551c4e" (UID: "c73a1965-ccff-43eb-a317-91ca6e551c4e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604677 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604715 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604727 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4mf\" (UniqueName: \"kubernetes.io/projected/c73a1965-ccff-43eb-a317-91ca6e551c4e-kube-api-access-ct4mf\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604741 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604752 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604797 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604837 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73a1965-ccff-43eb-a317-91ca6e551c4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.604850 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c73a1965-ccff-43eb-a317-91ca6e551c4e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.626503 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 28 16:07:32 crc kubenswrapper[4903]: I0128 16:07:32.706130 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.290523 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerStarted","Data":"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1"} Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.292675 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c73a1965-ccff-43eb-a317-91ca6e551c4e","Type":"ContainerDied","Data":"93ac5e0cd1e8c630f70dc2f6819515e19ac5872172b0dada2894d9cafd7db0b8"} Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.292754 4903 scope.go:117] "RemoveContainer" containerID="f4190431caf39e1cf62f8df34560e8922fa98469cc57b19abf0293f2b23bc912" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.292702 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.330944 4903 scope.go:117] "RemoveContainer" containerID="6d3f8b7a72c94efad6f7723a41564b3b72e80239ea0797b5173fa8f40d6d1376" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.380630 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.404271 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.416083 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:07:33 crc kubenswrapper[4903]: E0128 16:07:33.416461 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-httpd" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.416480 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-httpd" Jan 28 16:07:33 crc kubenswrapper[4903]: E0128 16:07:33.416514 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-log" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.416522 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-log" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.416702 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-log" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.416729 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" containerName="glance-httpd" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.417626 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.421275 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.424605 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.424858 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.521581 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-logs\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.521647 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-scripts\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.521672 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.521908 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.522024 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.522088 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5m4r\" (UniqueName: \"kubernetes.io/projected/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-kube-api-access-j5m4r\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.522178 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-config-data\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.522412 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.623962 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.624062 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-logs\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.624088 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-scripts\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.624106 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.624146 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.624173 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.624194 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5m4r\" (UniqueName: \"kubernetes.io/projected/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-kube-api-access-j5m4r\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.624216 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-config-data\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.625237 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.625523 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-logs\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.625733 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.628655 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.630144 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.630905 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-config-data\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.633592 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-scripts\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.653317 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5m4r\" (UniqueName: \"kubernetes.io/projected/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-kube-api-access-j5m4r\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.670249 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " pod="openstack/glance-default-external-api-0" Jan 28 16:07:33 crc kubenswrapper[4903]: I0128 16:07:33.744876 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:07:34 crc kubenswrapper[4903]: I0128 16:07:34.303738 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerStarted","Data":"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d"} Jan 28 16:07:34 crc kubenswrapper[4903]: I0128 16:07:34.405425 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:07:34 crc kubenswrapper[4903]: I0128 16:07:34.424186 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73a1965-ccff-43eb-a317-91ca6e551c4e" path="/var/lib/kubelet/pods/c73a1965-ccff-43eb-a317-91ca6e551c4e/volumes" Jan 28 16:07:35 crc kubenswrapper[4903]: I0128 16:07:35.320493 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd","Type":"ContainerStarted","Data":"09c605d6038ace2063cd36abb755adc5f02bf5408e796a180094c2237ab62208"} Jan 28 16:07:35 crc kubenswrapper[4903]: I0128 16:07:35.321093 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd","Type":"ContainerStarted","Data":"37f0decf149697fce841b3da8028a302c15a072726751463f73d11e364a82070"} Jan 28 16:07:35 crc kubenswrapper[4903]: I0128 16:07:35.325543 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerStarted","Data":"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9"} Jan 28 16:07:36 crc kubenswrapper[4903]: I0128 16:07:36.347727 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd","Type":"ContainerStarted","Data":"5294340766b49118b122c18adf127768d2b7a2248eea8752adcf1bf834f406c1"} Jan 28 16:07:36 crc kubenswrapper[4903]: I0128 16:07:36.385240 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.385221637 podStartE2EDuration="3.385221637s" podCreationTimestamp="2026-01-28 16:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:07:36.381848564 +0000 UTC m=+1328.657820095" watchObservedRunningTime="2026-01-28 16:07:36.385221637 +0000 UTC m=+1328.661193148" Jan 28 16:07:37 crc kubenswrapper[4903]: I0128 16:07:37.382916 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerStarted","Data":"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6"} Jan 28 16:07:37 crc kubenswrapper[4903]: I0128 16:07:37.383633 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 16:07:37 crc kubenswrapper[4903]: I0128 16:07:37.408811 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.156073675 podStartE2EDuration="9.408789314s" podCreationTimestamp="2026-01-28 16:07:28 +0000 UTC" firstStartedPulling="2026-01-28 16:07:29.096143067 +0000 UTC m=+1321.372114578" lastFinishedPulling="2026-01-28 16:07:36.348858696 +0000 UTC m=+1328.624830217" observedRunningTime="2026-01-28 
16:07:37.403761786 +0000 UTC m=+1329.679733297" watchObservedRunningTime="2026-01-28 16:07:37.408789314 +0000 UTC m=+1329.684760825" Jan 28 16:07:43 crc kubenswrapper[4903]: I0128 16:07:43.745669 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 16:07:43 crc kubenswrapper[4903]: I0128 16:07:43.746208 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 16:07:43 crc kubenswrapper[4903]: I0128 16:07:43.776575 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 16:07:43 crc kubenswrapper[4903]: I0128 16:07:43.790142 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 16:07:44 crc kubenswrapper[4903]: I0128 16:07:44.450093 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 16:07:44 crc kubenswrapper[4903]: I0128 16:07:44.450139 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 16:07:46 crc kubenswrapper[4903]: I0128 16:07:46.378488 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 16:07:46 crc kubenswrapper[4903]: I0128 16:07:46.455204 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 16:07:48 crc kubenswrapper[4903]: I0128 16:07:48.486160 4903 generic.go:334] "Generic (PLEG): container finished" podID="94d476df-369e-428e-945d-f2a3dc1a78ea" containerID="be6539b41f3ffef5e41bc2a35bcc5c813d4bb875f67dde5ceb756b4027a5dd69" exitCode=0 Jan 28 16:07:48 crc kubenswrapper[4903]: I0128 16:07:48.486243 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" event={"ID":"94d476df-369e-428e-945d-f2a3dc1a78ea","Type":"ContainerDied","Data":"be6539b41f3ffef5e41bc2a35bcc5c813d4bb875f67dde5ceb756b4027a5dd69"} Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.847116 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.952640 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdzkj\" (UniqueName: \"kubernetes.io/projected/94d476df-369e-428e-945d-f2a3dc1a78ea-kube-api-access-qdzkj\") pod \"94d476df-369e-428e-945d-f2a3dc1a78ea\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.952715 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-combined-ca-bundle\") pod \"94d476df-369e-428e-945d-f2a3dc1a78ea\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.952906 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-scripts\") pod \"94d476df-369e-428e-945d-f2a3dc1a78ea\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.953004 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-config-data\") pod \"94d476df-369e-428e-945d-f2a3dc1a78ea\" (UID: \"94d476df-369e-428e-945d-f2a3dc1a78ea\") " Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.959496 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d476df-369e-428e-945d-f2a3dc1a78ea-kube-api-access-qdzkj" (OuterVolumeSpecName: "kube-api-access-qdzkj") pod "94d476df-369e-428e-945d-f2a3dc1a78ea" (UID: "94d476df-369e-428e-945d-f2a3dc1a78ea"). InnerVolumeSpecName "kube-api-access-qdzkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.961377 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-scripts" (OuterVolumeSpecName: "scripts") pod "94d476df-369e-428e-945d-f2a3dc1a78ea" (UID: "94d476df-369e-428e-945d-f2a3dc1a78ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.992055 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d476df-369e-428e-945d-f2a3dc1a78ea" (UID: "94d476df-369e-428e-945d-f2a3dc1a78ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:49 crc kubenswrapper[4903]: I0128 16:07:49.996678 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-config-data" (OuterVolumeSpecName: "config-data") pod "94d476df-369e-428e-945d-f2a3dc1a78ea" (UID: "94d476df-369e-428e-945d-f2a3dc1a78ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.055180 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdzkj\" (UniqueName: \"kubernetes.io/projected/94d476df-369e-428e-945d-f2a3dc1a78ea-kube-api-access-qdzkj\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.055234 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.055255 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.055273 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d476df-369e-428e-945d-f2a3dc1a78ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.507444 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" event={"ID":"94d476df-369e-428e-945d-f2a3dc1a78ea","Type":"ContainerDied","Data":"5457310cf37aefedf7cf0bdeb45ae66bb2a0ef92a53db00a311c998b61b3731b"} Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.507944 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5457310cf37aefedf7cf0bdeb45ae66bb2a0ef92a53db00a311c998b61b3731b" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.507507 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xvrh9" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.626729 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 16:07:50 crc kubenswrapper[4903]: E0128 16:07:50.627148 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d476df-369e-428e-945d-f2a3dc1a78ea" containerName="nova-cell0-conductor-db-sync" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.627170 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d476df-369e-428e-945d-f2a3dc1a78ea" containerName="nova-cell0-conductor-db-sync" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.627403 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d476df-369e-428e-945d-f2a3dc1a78ea" containerName="nova-cell0-conductor-db-sync" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.628204 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.632828 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.633085 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nvkh6" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.646722 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.769031 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.769086 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.769170 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvtvt\" (UniqueName: \"kubernetes.io/projected/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-kube-api-access-zvtvt\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.870799 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvtvt\" (UniqueName: \"kubernetes.io/projected/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-kube-api-access-zvtvt\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.870981 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.871024 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.877054 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.884588 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.891747 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvtvt\" (UniqueName: \"kubernetes.io/projected/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-kube-api-access-zvtvt\") pod \"nova-cell0-conductor-0\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:50 crc kubenswrapper[4903]: I0128 16:07:50.961155 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:51 crc kubenswrapper[4903]: I0128 16:07:51.463009 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 16:07:51 crc kubenswrapper[4903]: W0128 16:07:51.469655 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d08ed75_05f7_4c45_bc6e_0562a7bbb936.slice/crio-9096665546ada278b46fb5196597e34bca4dd34ea029157595af40b9d81b6f0a WatchSource:0}: Error finding container 9096665546ada278b46fb5196597e34bca4dd34ea029157595af40b9d81b6f0a: Status 404 returned error can't find the container with id 9096665546ada278b46fb5196597e34bca4dd34ea029157595af40b9d81b6f0a Jan 28 16:07:51 crc kubenswrapper[4903]: I0128 16:07:51.517348 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d08ed75-05f7-4c45-bc6e-0562a7bbb936","Type":"ContainerStarted","Data":"9096665546ada278b46fb5196597e34bca4dd34ea029157595af40b9d81b6f0a"} Jan 28 16:07:52 crc kubenswrapper[4903]: I0128 16:07:52.526369 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d08ed75-05f7-4c45-bc6e-0562a7bbb936","Type":"ContainerStarted","Data":"d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e"} Jan 28 16:07:52 crc kubenswrapper[4903]: I0128 16:07:52.526838 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 16:07:52 crc kubenswrapper[4903]: I0128 16:07:52.551509 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.551489219 podStartE2EDuration="2.551489219s" podCreationTimestamp="2026-01-28 16:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:07:52.544235562 +0000 UTC m=+1344.820207103" watchObservedRunningTime="2026-01-28 16:07:52.551489219 +0000 UTC m=+1344.827460730" Jan 28 16:07:58 crc kubenswrapper[4903]: I0128 16:07:58.629481 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 16:08:00 crc kubenswrapper[4903]: I0128 16:08:00.997778 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.791973 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rlbrx"] Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.793743 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.796340 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.796944 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.804169 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rlbrx"] Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.816221 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-config-data\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.816310 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.816437 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-scripts\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.816473 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qxks\" (UniqueName: \"kubernetes.io/projected/af8934da-e18b-43bc-8a6d-11973760064f-kube-api-access-8qxks\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.918446 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-scripts\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.918497 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qxks\" (UniqueName: \"kubernetes.io/projected/af8934da-e18b-43bc-8a6d-11973760064f-kube-api-access-8qxks\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.918596 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-config-data\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.918657 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.927309 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.929931 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-config-data\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.930092 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.933217 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.934757 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-scripts\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.940761 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.966467 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:01 crc kubenswrapper[4903]: I0128 16:08:01.969174 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qxks\" (UniqueName: \"kubernetes.io/projected/af8934da-e18b-43bc-8a6d-11973760064f-kube-api-access-8qxks\") pod \"nova-cell0-cell-mapping-rlbrx\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.105261 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.108039 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.120581 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.122610 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.152602 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.162738 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22ee1756-6329-4378-8d19-965b0d11b3b8-logs\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.163033 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l76pn\" (UniqueName: \"kubernetes.io/projected/22ee1756-6329-4378-8d19-965b0d11b3b8-kube-api-access-l76pn\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.163072 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.163096 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.163166 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.163223 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.163352 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5v69\" (UniqueName: \"kubernetes.io/projected/776e78f3-8a98-48fb-b92a-a56ab4baa23e-kube-api-access-r5v69\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.181399 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.184914 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.191159 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.222587 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.250496 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.251748 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.254948 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.260138 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264552 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264604 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264642 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/309c5093-a146-41a6-b0da-f6da00d2bec8-logs\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264675 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5v69\" (UniqueName: \"kubernetes.io/projected/776e78f3-8a98-48fb-b92a-a56ab4baa23e-kube-api-access-r5v69\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264719 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264755 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22ee1756-6329-4378-8d19-965b0d11b3b8-logs\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264779 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-config-data\") pod \"nova-metadata-0\" (UID: 
\"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264812 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf69v\" (UniqueName: \"kubernetes.io/projected/309c5093-a146-41a6-b0da-f6da00d2bec8-kube-api-access-pf69v\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264842 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l76pn\" (UniqueName: \"kubernetes.io/projected/22ee1756-6329-4378-8d19-965b0d11b3b8-kube-api-access-l76pn\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264861 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.264879 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.279037 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22ee1756-6329-4378-8d19-965b0d11b3b8-logs\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.283306 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.286344 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-85fkw"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.287847 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.293390 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.303360 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l76pn\" (UniqueName: \"kubernetes.io/projected/22ee1756-6329-4378-8d19-965b0d11b3b8-kube-api-access-l76pn\") pod \"nova-api-0\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.304090 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.309857 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-85fkw"] Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.320479 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.334552 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5v69\" (UniqueName: \"kubernetes.io/projected/776e78f3-8a98-48fb-b92a-a56ab4baa23e-kube-api-access-r5v69\") pod \"nova-cell1-novncproxy-0\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.376459 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/309c5093-a146-41a6-b0da-f6da00d2bec8-logs\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377047 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377084 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377127 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: 
\"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377232 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377278 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-config\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377307 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-config-data\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377332 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsnnj\" (UniqueName: \"kubernetes.io/projected/87d7630a-9844-454d-86d4-30d4da86519b-kube-api-access-zsnnj\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377394 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-config-data\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377482 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf69v\" (UniqueName: \"kubernetes.io/projected/309c5093-a146-41a6-b0da-f6da00d2bec8-kube-api-access-pf69v\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377555 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377592 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.377658 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hs9d\" (UniqueName: \"kubernetes.io/projected/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-kube-api-access-2hs9d\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " 
pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.379013 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/309c5093-a146-41a6-b0da-f6da00d2bec8-logs\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.382787 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.393392 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.400139 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-config-data\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.415147 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf69v\" (UniqueName: \"kubernetes.io/projected/309c5093-a146-41a6-b0da-f6da00d2bec8-kube-api-access-pf69v\") pod \"nova-metadata-0\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.418348 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.442344 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479156 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-config\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479220 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-config-data\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479250 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsnnj\" (UniqueName: \"kubernetes.io/projected/87d7630a-9844-454d-86d4-30d4da86519b-kube-api-access-zsnnj\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479376 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479404 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479448 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hs9d\" (UniqueName: \"kubernetes.io/projected/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-kube-api-access-2hs9d\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479606 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479640 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.479666 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 
16:08:02.485139 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.485621 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-config\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.486087 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.486454 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.488819 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.492477 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.497150 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-config-data\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.506389 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hs9d\" (UniqueName: \"kubernetes.io/projected/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-kube-api-access-2hs9d\") pod \"dnsmasq-dns-557bbc7df7-85fkw\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.506450 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsnnj\" (UniqueName: \"kubernetes.io/projected/87d7630a-9844-454d-86d4-30d4da86519b-kube-api-access-zsnnj\") pod \"nova-scheduler-0\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.742290 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.776066 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:02 crc kubenswrapper[4903]: I0128 16:08:02.920626 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rlbrx"] Jan 28 16:08:02 crc kubenswrapper[4903]: W0128 16:08:02.952840 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8934da_e18b_43bc_8a6d_11973760064f.slice/crio-25fdf8c96070bf77324058bfc5423917a0247af66c913ca515949c73727fa14c WatchSource:0}: Error finding container 25fdf8c96070bf77324058bfc5423917a0247af66c913ca515949c73727fa14c: Status 404 returned error can't find the container with id 25fdf8c96070bf77324058bfc5423917a0247af66c913ca515949c73727fa14c Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.077319 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6jgbm"] Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.079482 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.087935 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6jgbm"] Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.088557 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.088801 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.100756 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.100825 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-config-data\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.100924 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-scripts\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.100976 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnkkh\" (UniqueName: \"kubernetes.io/projected/1cff8440-59d9-4491-ae2e-2568b28d8ae3-kube-api-access-rnkkh\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 
16:08:03.160609 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.191704 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.202316 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-scripts\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.203306 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnkkh\" (UniqueName: \"kubernetes.io/projected/1cff8440-59d9-4491-ae2e-2568b28d8ae3-kube-api-access-rnkkh\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.203479 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.203611 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-config-data\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: W0128 16:08:03.208820 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod309c5093_a146_41a6_b0da_f6da00d2bec8.slice/crio-d238c6ff8f96bb2e10a321ce400891fc531e6a16ec04647aafbe1857306558e3 WatchSource:0}: Error finding container d238c6ff8f96bb2e10a321ce400891fc531e6a16ec04647aafbe1857306558e3: Status 404 returned error can't find the container with id d238c6ff8f96bb2e10a321ce400891fc531e6a16ec04647aafbe1857306558e3 Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.210014 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.211391 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.215129 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-scripts\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.215950 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-config-data\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: W0128 16:08:03.216056 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod776e78f3_8a98_48fb_b92a_a56ab4baa23e.slice/crio-3216fee2b93dbbe38760f108c9b2c660d35baca7dead00be55aabec8f1647005 WatchSource:0}: Error finding container 3216fee2b93dbbe38760f108c9b2c660d35baca7dead00be55aabec8f1647005: Status 404 returned error can't find the container with id 3216fee2b93dbbe38760f108c9b2c660d35baca7dead00be55aabec8f1647005 Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.220680 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnkkh\" (UniqueName: \"kubernetes.io/projected/1cff8440-59d9-4491-ae2e-2568b28d8ae3-kube-api-access-rnkkh\") pod \"nova-cell1-conductor-db-sync-6jgbm\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.385398 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-85fkw"] Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.436413 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.461774 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.664986 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" event={"ID":"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd","Type":"ContainerStarted","Data":"9392e2d2e84df73b72b11a01e286c57679919ed0b66076115429fef4b16ee0b9"} Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.669264 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rlbrx" event={"ID":"af8934da-e18b-43bc-8a6d-11973760064f","Type":"ContainerStarted","Data":"a2b24315b8f846b0c4f8ca5e92f63fee6c13fd076e3a020d8137457512b1940e"} Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.669330 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rlbrx" event={"ID":"af8934da-e18b-43bc-8a6d-11973760064f","Type":"ContainerStarted","Data":"25fdf8c96070bf77324058bfc5423917a0247af66c913ca515949c73727fa14c"} Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.679318 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"776e78f3-8a98-48fb-b92a-a56ab4baa23e","Type":"ContainerStarted","Data":"3216fee2b93dbbe38760f108c9b2c660d35baca7dead00be55aabec8f1647005"} Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.690911 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rlbrx" podStartSLOduration=2.690886007 podStartE2EDuration="2.690886007s" podCreationTimestamp="2026-01-28 16:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:03.689786476 +0000 UTC m=+1355.965757987" watchObservedRunningTime="2026-01-28 16:08:03.690886007 +0000 UTC m=+1355.966857518" Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 
16:08:03.693276 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22ee1756-6329-4378-8d19-965b0d11b3b8","Type":"ContainerStarted","Data":"5dd56cb037b31d7e66d5097eb0c400da6714f5f312d29098fcc3c14bc88a73b7"} Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.697796 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"309c5093-a146-41a6-b0da-f6da00d2bec8","Type":"ContainerStarted","Data":"d238c6ff8f96bb2e10a321ce400891fc531e6a16ec04647aafbe1857306558e3"} Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.736710 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87d7630a-9844-454d-86d4-30d4da86519b","Type":"ContainerStarted","Data":"9d0735f5a5859d6cad14230f27179432808d2bca3be0d5876cbcef27aff9a3a4"} Jan 28 16:08:03 crc kubenswrapper[4903]: I0128 16:08:03.788136 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6jgbm"] Jan 28 16:08:04 crc kubenswrapper[4903]: I0128 16:08:04.752228 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" event={"ID":"1cff8440-59d9-4491-ae2e-2568b28d8ae3","Type":"ContainerStarted","Data":"6277976b066e33086b843796112a47cd3c785a0a906a6cc042e802b71a70d947"} Jan 28 16:08:04 crc kubenswrapper[4903]: I0128 16:08:04.752908 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" event={"ID":"1cff8440-59d9-4491-ae2e-2568b28d8ae3","Type":"ContainerStarted","Data":"6ba150103847121362692c8e5ed35691340b8699941a87779d280a40ab8f2f75"} Jan 28 16:08:04 crc kubenswrapper[4903]: I0128 16:08:04.757060 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" event={"ID":"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd","Type":"ContainerDied","Data":"459568d7715070f82d1d07692a88852d708864004ce8865d8c244320b036fa82"} Jan 28 16:08:04 crc kubenswrapper[4903]: I0128 16:08:04.758423 4903 generic.go:334] "Generic (PLEG): container finished" podID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerID="459568d7715070f82d1d07692a88852d708864004ce8865d8c244320b036fa82" exitCode=0 Jan 28 16:08:04 crc kubenswrapper[4903]: I0128 16:08:04.768620 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" podStartSLOduration=1.768602268 podStartE2EDuration="1.768602268s" podCreationTimestamp="2026-01-28 16:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:04.767347665 +0000 UTC m=+1357.043319176" watchObservedRunningTime="2026-01-28 16:08:04.768602268 +0000 UTC m=+1357.044573779" Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.078664 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.101301 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.315741 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.315954 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="30b00809-4c91-4c35-b54a-46b5092fdc87" containerName="kube-state-metrics" 
containerID="cri-o://3ac4a9d51af634e41ab5f731ba48387b5e2e0cad4f76dfa9914df21ad083c9a5" gracePeriod=30 Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.412406 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="30b00809-4c91-4c35-b54a-46b5092fdc87" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": dial tcp 10.217.0.107:8081: connect: connection refused" Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.813554 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"776e78f3-8a98-48fb-b92a-a56ab4baa23e","Type":"ContainerStarted","Data":"b0255b57a675117144280410a3cc8cc8f9253a8332aa44feaff2f186b1d47758"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.814223 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="776e78f3-8a98-48fb-b92a-a56ab4baa23e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b0255b57a675117144280410a3cc8cc8f9253a8332aa44feaff2f186b1d47758" gracePeriod=30 Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.828940 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22ee1756-6329-4378-8d19-965b0d11b3b8","Type":"ContainerStarted","Data":"886ff9a67b3e067958daeb5c4251e2ed5fcaee5a4c9cdce232e635968d6227e1"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.829008 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22ee1756-6329-4378-8d19-965b0d11b3b8","Type":"ContainerStarted","Data":"6be771bdb7dc314a745d84f9f6abf4088696fe523d9425e0f8e0d8fb76839497"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.851836 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"309c5093-a146-41a6-b0da-f6da00d2bec8","Type":"ContainerStarted","Data":"b6c753060fc37429d6df0849adc55674f2f1d9fd058720bf0ad6a6bf1803a871"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.851892 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"309c5093-a146-41a6-b0da-f6da00d2bec8","Type":"ContainerStarted","Data":"06ec5bf78244073fbc75582f2d763a541cf3c9382c354bce90fc029828ab6d27"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.852046 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-log" containerID="cri-o://06ec5bf78244073fbc75582f2d763a541cf3c9382c354bce90fc029828ab6d27" gracePeriod=30 Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.852424 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-metadata" containerID="cri-o://b6c753060fc37429d6df0849adc55674f2f1d9fd058720bf0ad6a6bf1803a871" gracePeriod=30 Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.863313 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87d7630a-9844-454d-86d4-30d4da86519b","Type":"ContainerStarted","Data":"2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.865442 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=3.381636507 podStartE2EDuration="6.865421086s" podCreationTimestamp="2026-01-28 16:08:01 +0000 UTC" firstStartedPulling="2026-01-28 16:08:03.22101912 +0000 UTC m=+1355.496990631" lastFinishedPulling="2026-01-28 16:08:06.704803699 +0000 UTC m=+1358.980775210" observedRunningTime="2026-01-28 16:08:07.852767381 +0000 UTC m=+1360.128738892" watchObservedRunningTime="2026-01-28 16:08:07.865421086 +0000 UTC m=+1360.141392597" Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.867679 4903 generic.go:334] "Generic (PLEG): container finished" podID="30b00809-4c91-4c35-b54a-46b5092fdc87" containerID="3ac4a9d51af634e41ab5f731ba48387b5e2e0cad4f76dfa9914df21ad083c9a5" exitCode=2 Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.867746 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30b00809-4c91-4c35-b54a-46b5092fdc87","Type":"ContainerDied","Data":"3ac4a9d51af634e41ab5f731ba48387b5e2e0cad4f76dfa9914df21ad083c9a5"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.886159 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" event={"ID":"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd","Type":"ContainerStarted","Data":"6d6d1771a09a377962155d33bf5389253d5717c79cf93cba1a218f8eb08c3def"} Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.887161 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.907723 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.417948419 podStartE2EDuration="5.90770177s" podCreationTimestamp="2026-01-28 16:08:02 +0000 UTC" firstStartedPulling="2026-01-28 16:08:03.21514235 +0000 UTC m=+1355.491113861" lastFinishedPulling="2026-01-28 16:08:06.704895701 +0000 UTC m=+1358.980867212" observedRunningTime="2026-01-28 16:08:07.882155973 +0000 UTC m=+1360.158127484" watchObservedRunningTime="2026-01-28 16:08:07.90770177 +0000 UTC m=+1360.183673281" Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.924727 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.367852595 podStartE2EDuration="6.924689374s" podCreationTimestamp="2026-01-28 16:08:01 +0000 UTC" firstStartedPulling="2026-01-28 16:08:03.151496916 +0000 UTC m=+1355.427468427" lastFinishedPulling="2026-01-28 16:08:06.708333695 +0000 UTC m=+1358.984305206" observedRunningTime="2026-01-28 16:08:07.913040995 +0000 UTC m=+1360.189012506" watchObservedRunningTime="2026-01-28 16:08:07.924689374 +0000 UTC m=+1360.200660895" Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.941345 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" podStartSLOduration=5.941319197 podStartE2EDuration="5.941319197s" podCreationTimestamp="2026-01-28 16:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:07.932710602 +0000 UTC m=+1360.208682113" watchObservedRunningTime="2026-01-28 16:08:07.941319197 +0000 UTC m=+1360.217290718" Jan 28 16:08:07 crc kubenswrapper[4903]: I0128 16:08:07.967313 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.743037901 podStartE2EDuration="5.967291866s" podCreationTimestamp="2026-01-28 
16:08:02 +0000 UTC" firstStartedPulling="2026-01-28 16:08:03.480589035 +0000 UTC m=+1355.756560546" lastFinishedPulling="2026-01-28 16:08:06.70484299 +0000 UTC m=+1358.980814511" observedRunningTime="2026-01-28 16:08:07.954063155 +0000 UTC m=+1360.230034666" watchObservedRunningTime="2026-01-28 16:08:07.967291866 +0000 UTC m=+1360.243263377" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.042296 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.241682 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npdkj\" (UniqueName: \"kubernetes.io/projected/30b00809-4c91-4c35-b54a-46b5092fdc87-kube-api-access-npdkj\") pod \"30b00809-4c91-4c35-b54a-46b5092fdc87\" (UID: \"30b00809-4c91-4c35-b54a-46b5092fdc87\") " Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.249723 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b00809-4c91-4c35-b54a-46b5092fdc87-kube-api-access-npdkj" (OuterVolumeSpecName: "kube-api-access-npdkj") pod "30b00809-4c91-4c35-b54a-46b5092fdc87" (UID: "30b00809-4c91-4c35-b54a-46b5092fdc87"). InnerVolumeSpecName "kube-api-access-npdkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.344815 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npdkj\" (UniqueName: \"kubernetes.io/projected/30b00809-4c91-4c35-b54a-46b5092fdc87-kube-api-access-npdkj\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.909520 4903 generic.go:334] "Generic (PLEG): container finished" podID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerID="06ec5bf78244073fbc75582f2d763a541cf3c9382c354bce90fc029828ab6d27" exitCode=143 Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.909566 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"309c5093-a146-41a6-b0da-f6da00d2bec8","Type":"ContainerDied","Data":"06ec5bf78244073fbc75582f2d763a541cf3c9382c354bce90fc029828ab6d27"} Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.911233 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30b00809-4c91-4c35-b54a-46b5092fdc87","Type":"ContainerDied","Data":"bafea61a5848b168e4d412bcc8fc7ba3bcb0abb2e92be996f368e21eb608b82f"} Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.911272 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.911272 4903 scope.go:117] "RemoveContainer" containerID="3ac4a9d51af634e41ab5f731ba48387b5e2e0cad4f76dfa9914df21ad083c9a5" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.934165 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.950250 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.964626 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:08:08 crc kubenswrapper[4903]: E0128 16:08:08.965103 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b00809-4c91-4c35-b54a-46b5092fdc87" containerName="kube-state-metrics" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.965122 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b00809-4c91-4c35-b54a-46b5092fdc87" containerName="kube-state-metrics" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.965359 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b00809-4c91-4c35-b54a-46b5092fdc87" containerName="kube-state-metrics" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.966096 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.969772 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.969948 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 16:08:08 crc kubenswrapper[4903]: I0128 16:08:08.990179 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.061498 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2q9\" (UniqueName: \"kubernetes.io/projected/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-api-access-rj2q9\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.061614 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.061737 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.061779 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.162959 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2q9\" (UniqueName: \"kubernetes.io/projected/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-api-access-rj2q9\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.163282 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.163352 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.163379 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.178474 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.179333 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.182717 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.185454 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2q9\" (UniqueName: \"kubernetes.io/projected/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-api-access-rj2q9\") pod \"kube-state-metrics-0\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.283245 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.596827 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.597332 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-central-agent" containerID="cri-o://e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1" gracePeriod=30 Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.597367 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="proxy-httpd" containerID="cri-o://a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6" gracePeriod=30 Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.597440 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="sg-core" containerID="cri-o://e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9" gracePeriod=30 Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.597475 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-notification-agent" containerID="cri-o://774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d" gracePeriod=30 Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.739166 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.920141 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba","Type":"ContainerStarted","Data":"4a33692bb65b9d9eb87a3fae88a43f812fd25ff67ac886ded3b66b0a56dc0076"} Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.926157 4903 generic.go:334] "Generic (PLEG): container finished" podID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerID="a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6" exitCode=0 Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.926190 4903 generic.go:334] "Generic (PLEG): container finished" podID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerID="e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9" exitCode=2 Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.926228 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerDied","Data":"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6"} Jan 28 16:08:09 crc kubenswrapper[4903]: I0128 16:08:09.926277 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerDied","Data":"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9"} Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.455240 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b00809-4c91-4c35-b54a-46b5092fdc87" path="/var/lib/kubelet/pods/30b00809-4c91-4c35-b54a-46b5092fdc87/volumes" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.619165 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.800279 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-config-data\") pod \"a3ac956a-bc7d-4963-94ae-939124d171f0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.800392 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-log-httpd\") pod \"a3ac956a-bc7d-4963-94ae-939124d171f0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.800409 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bd22\" (UniqueName: \"kubernetes.io/projected/a3ac956a-bc7d-4963-94ae-939124d171f0-kube-api-access-5bd22\") pod \"a3ac956a-bc7d-4963-94ae-939124d171f0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.800595 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-scripts\") pod \"a3ac956a-bc7d-4963-94ae-939124d171f0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.800617 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-combined-ca-bundle\") pod \"a3ac956a-bc7d-4963-94ae-939124d171f0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.800647 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-sg-core-conf-yaml\") pod \"a3ac956a-bc7d-4963-94ae-939124d171f0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.800696 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-run-httpd\") pod \"a3ac956a-bc7d-4963-94ae-939124d171f0\" (UID: \"a3ac956a-bc7d-4963-94ae-939124d171f0\") " Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.801007 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3ac956a-bc7d-4963-94ae-939124d171f0" (UID: "a3ac956a-bc7d-4963-94ae-939124d171f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.801093 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3ac956a-bc7d-4963-94ae-939124d171f0" (UID: "a3ac956a-bc7d-4963-94ae-939124d171f0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.801656 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.801681 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3ac956a-bc7d-4963-94ae-939124d171f0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.808918 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-scripts" (OuterVolumeSpecName: "scripts") pod "a3ac956a-bc7d-4963-94ae-939124d171f0" (UID: "a3ac956a-bc7d-4963-94ae-939124d171f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.810827 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ac956a-bc7d-4963-94ae-939124d171f0-kube-api-access-5bd22" (OuterVolumeSpecName: "kube-api-access-5bd22") pod "a3ac956a-bc7d-4963-94ae-939124d171f0" (UID: "a3ac956a-bc7d-4963-94ae-939124d171f0"). InnerVolumeSpecName "kube-api-access-5bd22". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.832283 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3ac956a-bc7d-4963-94ae-939124d171f0" (UID: "a3ac956a-bc7d-4963-94ae-939124d171f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.892941 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3ac956a-bc7d-4963-94ae-939124d171f0" (UID: "a3ac956a-bc7d-4963-94ae-939124d171f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.903592 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.903633 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.903648 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.903659 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bd22\" (UniqueName: \"kubernetes.io/projected/a3ac956a-bc7d-4963-94ae-939124d171f0-kube-api-access-5bd22\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.917523 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-config-data" (OuterVolumeSpecName: "config-data") pod "a3ac956a-bc7d-4963-94ae-939124d171f0" (UID: "a3ac956a-bc7d-4963-94ae-939124d171f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.937302 4903 generic.go:334] "Generic (PLEG): container finished" podID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerID="774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d" exitCode=0 Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.937339 4903 generic.go:334] "Generic (PLEG): container finished" podID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerID="e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1" exitCode=0 Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.937350 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerDied","Data":"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d"} Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.937390 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerDied","Data":"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1"} Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.937406 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3ac956a-bc7d-4963-94ae-939124d171f0","Type":"ContainerDied","Data":"ceef3d8749b1eef68562fa7cf10bc42b2ad70ee82586c311d8962d62c355d283"} Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.937425 4903 scope.go:117] "RemoveContainer" containerID="a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.937362 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.939335 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba","Type":"ContainerStarted","Data":"02154f1f0b54e0cfa5dae5ad4eb9c57e22b0da30380c0810c189562bfe3ae25b"} Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.939476 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.963232 4903 scope.go:117] "RemoveContainer" containerID="e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.975403 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.388612576 podStartE2EDuration="2.975377213s" podCreationTimestamp="2026-01-28 16:08:08 +0000 UTC" firstStartedPulling="2026-01-28 16:08:09.758319033 +0000 UTC m=+1362.034290544" lastFinishedPulling="2026-01-28 16:08:10.34508366 +0000 UTC m=+1362.621055181" observedRunningTime="2026-01-28 16:08:10.970671395 +0000 UTC m=+1363.246642926" watchObservedRunningTime="2026-01-28 16:08:10.975377213 +0000 UTC m=+1363.251348744" Jan 28 16:08:10 crc kubenswrapper[4903]: I0128 16:08:10.998346 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.007809 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ac956a-bc7d-4963-94ae-939124d171f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.014488 4903 scope.go:117] "RemoveContainer" containerID="774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.014885 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.027205 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:11 crc kubenswrapper[4903]: E0128 16:08:11.027662 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-notification-agent" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.027686 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-notification-agent" Jan 28 16:08:11 crc kubenswrapper[4903]: E0128 16:08:11.027718 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="proxy-httpd" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.027726 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="proxy-httpd" Jan 28 16:08:11 crc kubenswrapper[4903]: E0128 16:08:11.027743 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="sg-core" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.027753 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="sg-core" Jan 28 16:08:11 crc kubenswrapper[4903]: E0128 16:08:11.027768 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-central-agent" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.027777 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-central-agent" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.028005 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="sg-core" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.028025 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-notification-agent" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.028042 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="ceilometer-central-agent" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.028054 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" containerName="proxy-httpd" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.030007 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.033134 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.033376 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.033519 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.056988 4903 scope.go:117] "RemoveContainer" containerID="e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.058877 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.090438 4903 scope.go:117] "RemoveContainer" containerID="a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6" Jan 28 16:08:11 crc kubenswrapper[4903]: E0128 16:08:11.091258 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6\": container with ID starting with a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6 not found: ID does not exist" containerID="a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.091300 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6"} err="failed to get container status \"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6\": rpc error: code = NotFound desc = could not find container \"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6\": container with ID starting with a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6 not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.091326 4903 scope.go:117] "RemoveContainer" containerID="e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9" Jan 28 16:08:11 crc 
kubenswrapper[4903]: E0128 16:08:11.091748 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9\": container with ID starting with e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9 not found: ID does not exist" containerID="e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.091781 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9"} err="failed to get container status \"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9\": rpc error: code = NotFound desc = could not find container \"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9\": container with ID starting with e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9 not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.091795 4903 scope.go:117] "RemoveContainer" containerID="774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d" Jan 28 16:08:11 crc kubenswrapper[4903]: E0128 16:08:11.092140 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d\": container with ID starting with 774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d not found: ID does not exist" containerID="774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.092190 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d"} err="failed to get container status \"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d\": rpc error: code = NotFound desc = could not find container \"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d\": container with ID starting with 774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.092217 4903 scope.go:117] "RemoveContainer" containerID="e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1" Jan 28 16:08:11 crc kubenswrapper[4903]: E0128 16:08:11.092512 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1\": container with ID starting with e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1 not found: ID does not exist" containerID="e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.092620 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1"} err="failed to get container status \"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1\": rpc error: code = NotFound desc = could not find container \"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1\": container with ID starting with e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1 not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: 
I0128 16:08:11.092638 4903 scope.go:117] "RemoveContainer" containerID="a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.092962 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6"} err="failed to get container status \"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6\": rpc error: code = NotFound desc = could not find container \"a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6\": container with ID starting with a849cef9a40384a0f9b29b90d1f5bd9c7effcad984c79a5be32eca8afeb7ffe6 not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.093009 4903 scope.go:117] "RemoveContainer" containerID="e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.093289 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9"} err="failed to get container status \"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9\": rpc error: code = NotFound desc = could not find container \"e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9\": container with ID starting with e16b333fbe0799b3560f5104291aa1911ad6a87a6939dba6a3f384531e3fa7a9 not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.093310 4903 scope.go:117] "RemoveContainer" containerID="774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.093573 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d"} err="failed to get container status \"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d\": rpc error: code = NotFound desc = could not find container \"774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d\": container with ID starting with 774a3b48a6d0c3c214d096855d06da105e282188bf8450c9010281ea3638ec0d not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.093597 4903 scope.go:117] "RemoveContainer" containerID="e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.093938 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1"} err="failed to get container status \"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1\": rpc error: code = NotFound desc = could not find container \"e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1\": container with ID starting with e43b4448477c08b126080455f8155dd19d49b2a5de7569e137b979a5746232c1 not found: ID does not exist" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.211941 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-scripts\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.212019 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-log-httpd\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.212111 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.212265 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.212298 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-run-httpd\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.212323 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-config-data\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.212356 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vlbs\" (UniqueName: \"kubernetes.io/projected/483753d5-378b-4dcf-a462-1fb273e851cc-kube-api-access-4vlbs\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.212414 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314352 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-run-httpd\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314400 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314421 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-config-data\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 
28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314449 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vlbs\" (UniqueName: \"kubernetes.io/projected/483753d5-378b-4dcf-a462-1fb273e851cc-kube-api-access-4vlbs\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314507 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314587 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-scripts\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314619 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-log-httpd\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.314641 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.315144 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-run-httpd\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.315480 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-log-httpd\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.318635 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.319246 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.319473 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc 
kubenswrapper[4903]: I0128 16:08:11.320928 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-config-data\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.321303 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-scripts\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.337370 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vlbs\" (UniqueName: \"kubernetes.io/projected/483753d5-378b-4dcf-a462-1fb273e851cc-kube-api-access-4vlbs\") pod \"ceilometer-0\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.366852 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.854254 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:11 crc kubenswrapper[4903]: I0128 16:08:11.951599 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerStarted","Data":"c0014e065823cdde8cb68918533e5ac2bb439791bdd40b915fed5e58ab5093c8"} Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.383905 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.384252 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.423719 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ac956a-bc7d-4963-94ae-939124d171f0" path="/var/lib/kubelet/pods/a3ac956a-bc7d-4963-94ae-939124d171f0/volumes" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.424412 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.424435 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.443350 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.742436 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.742496 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.768692 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.781217 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.850901 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-75bfc9b94f-b7g78"] Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.851577 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" podUID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerName="dnsmasq-dns" containerID="cri-o://d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe" gracePeriod=10 Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.986491 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerStarted","Data":"6208e65787fde5e4a197f4021077ab14af4e2cfe8f6c3dac084a147e070ddc73"} Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.993440 4903 generic.go:334] "Generic (PLEG): container finished" podID="af8934da-e18b-43bc-8a6d-11973760064f" containerID="a2b24315b8f846b0c4f8ca5e92f63fee6c13fd076e3a020d8137457512b1940e" exitCode=0 Jan 28 16:08:12 crc kubenswrapper[4903]: I0128 16:08:12.994374 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rlbrx" event={"ID":"af8934da-e18b-43bc-8a6d-11973760064f","Type":"ContainerDied","Data":"a2b24315b8f846b0c4f8ca5e92f63fee6c13fd076e3a020d8137457512b1940e"} Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.049933 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.468828 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.469126 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.182:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.621148 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.775654 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-sb\") pod \"5885ed8d-0267-41a4-9c88-e9be0091674c\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.775729 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-swift-storage-0\") pod \"5885ed8d-0267-41a4-9c88-e9be0091674c\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.775854 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-svc\") pod \"5885ed8d-0267-41a4-9c88-e9be0091674c\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.775904 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-config\") pod \"5885ed8d-0267-41a4-9c88-e9be0091674c\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.776040 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-nb\") pod \"5885ed8d-0267-41a4-9c88-e9be0091674c\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.776092 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6vn2\" (UniqueName: \"kubernetes.io/projected/5885ed8d-0267-41a4-9c88-e9be0091674c-kube-api-access-c6vn2\") pod \"5885ed8d-0267-41a4-9c88-e9be0091674c\" (UID: \"5885ed8d-0267-41a4-9c88-e9be0091674c\") " Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.812821 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5885ed8d-0267-41a4-9c88-e9be0091674c-kube-api-access-c6vn2" (OuterVolumeSpecName: "kube-api-access-c6vn2") pod "5885ed8d-0267-41a4-9c88-e9be0091674c" (UID: "5885ed8d-0267-41a4-9c88-e9be0091674c"). InnerVolumeSpecName "kube-api-access-c6vn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.880923 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6vn2\" (UniqueName: \"kubernetes.io/projected/5885ed8d-0267-41a4-9c88-e9be0091674c-kube-api-access-c6vn2\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.900006 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5885ed8d-0267-41a4-9c88-e9be0091674c" (UID: "5885ed8d-0267-41a4-9c88-e9be0091674c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.919208 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5885ed8d-0267-41a4-9c88-e9be0091674c" (UID: "5885ed8d-0267-41a4-9c88-e9be0091674c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.920113 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5885ed8d-0267-41a4-9c88-e9be0091674c" (UID: "5885ed8d-0267-41a4-9c88-e9be0091674c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.920826 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5885ed8d-0267-41a4-9c88-e9be0091674c" (UID: "5885ed8d-0267-41a4-9c88-e9be0091674c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.945843 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-config" (OuterVolumeSpecName: "config") pod "5885ed8d-0267-41a4-9c88-e9be0091674c" (UID: "5885ed8d-0267-41a4-9c88-e9be0091674c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.982505 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.982551 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.982562 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.982572 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:13 crc kubenswrapper[4903]: I0128 16:08:13.982583 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5885ed8d-0267-41a4-9c88-e9be0091674c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.004721 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerStarted","Data":"5a24b6f133be8724bb507b4e07921d6f2881f6d7f964099e7ffc67db065083a0"} Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.007928 4903 generic.go:334] "Generic (PLEG): container finished" 
podID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerID="d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe" exitCode=0 Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.008155 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.010166 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" event={"ID":"5885ed8d-0267-41a4-9c88-e9be0091674c","Type":"ContainerDied","Data":"d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe"} Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.010255 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-b7g78" event={"ID":"5885ed8d-0267-41a4-9c88-e9be0091674c","Type":"ContainerDied","Data":"26ac117d8eb8a3634aa9a14d3927190f6dbeab1e403a53fe85c245c9bc475d81"} Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.010281 4903 scope.go:117] "RemoveContainer" containerID="d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.049305 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-b7g78"] Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.058107 4903 scope.go:117] "RemoveContainer" containerID="60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.058287 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-b7g78"] Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.117797 4903 scope.go:117] "RemoveContainer" containerID="d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe" Jan 28 16:08:14 crc kubenswrapper[4903]: E0128 16:08:14.119101 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe\": container with ID starting with d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe not found: ID does not exist" containerID="d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.119184 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe"} err="failed to get container status \"d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe\": rpc error: code = NotFound desc = could not find container \"d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe\": container with ID starting with d0f2a44225406d37eb31eb1fa3e88865df5b3d4966d6229d8334555ad0a806fe not found: ID does not exist" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.119219 4903 scope.go:117] "RemoveContainer" containerID="60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a" Jan 28 16:08:14 crc kubenswrapper[4903]: E0128 16:08:14.122128 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a\": container with ID starting with 60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a not found: ID does not exist" containerID="60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a" Jan 28 16:08:14 crc 
kubenswrapper[4903]: I0128 16:08:14.122179 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a"} err="failed to get container status \"60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a\": rpc error: code = NotFound desc = could not find container \"60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a\": container with ID starting with 60debd5d1d63511843d29fcc85e9e89fdc840a379686802d85971dc934fae12a not found: ID does not exist" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.429012 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5885ed8d-0267-41a4-9c88-e9be0091674c" path="/var/lib/kubelet/pods/5885ed8d-0267-41a4-9c88-e9be0091674c/volumes" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.441182 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.594631 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-scripts\") pod \"af8934da-e18b-43bc-8a6d-11973760064f\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.594762 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-config-data\") pod \"af8934da-e18b-43bc-8a6d-11973760064f\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.594843 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-combined-ca-bundle\") pod \"af8934da-e18b-43bc-8a6d-11973760064f\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.594961 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qxks\" (UniqueName: \"kubernetes.io/projected/af8934da-e18b-43bc-8a6d-11973760064f-kube-api-access-8qxks\") pod \"af8934da-e18b-43bc-8a6d-11973760064f\" (UID: \"af8934da-e18b-43bc-8a6d-11973760064f\") " Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.602440 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8934da-e18b-43bc-8a6d-11973760064f-kube-api-access-8qxks" (OuterVolumeSpecName: "kube-api-access-8qxks") pod "af8934da-e18b-43bc-8a6d-11973760064f" (UID: "af8934da-e18b-43bc-8a6d-11973760064f"). InnerVolumeSpecName "kube-api-access-8qxks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.627974 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-scripts" (OuterVolumeSpecName: "scripts") pod "af8934da-e18b-43bc-8a6d-11973760064f" (UID: "af8934da-e18b-43bc-8a6d-11973760064f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.628562 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-config-data" (OuterVolumeSpecName: "config-data") pod "af8934da-e18b-43bc-8a6d-11973760064f" (UID: "af8934da-e18b-43bc-8a6d-11973760064f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.628594 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af8934da-e18b-43bc-8a6d-11973760064f" (UID: "af8934da-e18b-43bc-8a6d-11973760064f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.701166 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.701201 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.701213 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8934da-e18b-43bc-8a6d-11973760064f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:14 crc kubenswrapper[4903]: I0128 16:08:14.701225 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qxks\" (UniqueName: \"kubernetes.io/projected/af8934da-e18b-43bc-8a6d-11973760064f-kube-api-access-8qxks\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.017189 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rlbrx" event={"ID":"af8934da-e18b-43bc-8a6d-11973760064f","Type":"ContainerDied","Data":"25fdf8c96070bf77324058bfc5423917a0247af66c913ca515949c73727fa14c"} Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.017514 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25fdf8c96070bf77324058bfc5423917a0247af66c913ca515949c73727fa14c" Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.017436 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rlbrx" Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.019891 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerStarted","Data":"9b72e9c8533bb484a5098b45f3eefd44f36db58f9766a8b98b45742025cd67d5"} Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.131318 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.131569 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-log" containerID="cri-o://6be771bdb7dc314a745d84f9f6abf4088696fe523d9425e0f8e0d8fb76839497" gracePeriod=30 Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.131698 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-api" containerID="cri-o://886ff9a67b3e067958daeb5c4251e2ed5fcaee5a4c9cdce232e635968d6227e1" gracePeriod=30 Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.155432 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:15 crc kubenswrapper[4903]: I0128 16:08:15.155648 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="87d7630a-9844-454d-86d4-30d4da86519b" containerName="nova-scheduler-scheduler" containerID="cri-o://2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0" gracePeriod=30 Jan 28 16:08:16 crc kubenswrapper[4903]: I0128 16:08:16.038008 4903 generic.go:334] "Generic (PLEG): container finished" podID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerID="6be771bdb7dc314a745d84f9f6abf4088696fe523d9425e0f8e0d8fb76839497" exitCode=143 Jan 28 16:08:16 crc kubenswrapper[4903]: I0128 16:08:16.038100 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22ee1756-6329-4378-8d19-965b0d11b3b8","Type":"ContainerDied","Data":"6be771bdb7dc314a745d84f9f6abf4088696fe523d9425e0f8e0d8fb76839497"} Jan 28 16:08:17 crc kubenswrapper[4903]: I0128 16:08:17.049576 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerStarted","Data":"ac49defcc977e6c260d4743e9000a1e960aad0f05447b83c8d3471c0be564349"} Jan 28 16:08:17 crc kubenswrapper[4903]: I0128 16:08:17.050080 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 16:08:17 crc kubenswrapper[4903]: I0128 16:08:17.081672 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.961025332 podStartE2EDuration="7.081650239s" podCreationTimestamp="2026-01-28 16:08:10 +0000 UTC" firstStartedPulling="2026-01-28 16:08:11.865925517 +0000 UTC m=+1364.141897028" lastFinishedPulling="2026-01-28 16:08:15.986550424 +0000 UTC m=+1368.262521935" observedRunningTime="2026-01-28 16:08:17.070805132 +0000 UTC m=+1369.346776643" watchObservedRunningTime="2026-01-28 16:08:17.081650239 +0000 UTC m=+1369.357621750" Jan 28 16:08:17 crc kubenswrapper[4903]: E0128 16:08:17.744966 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:08:17 crc kubenswrapper[4903]: E0128 16:08:17.746616 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:08:17 crc kubenswrapper[4903]: E0128 16:08:17.747931 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:08:17 crc kubenswrapper[4903]: E0128 16:08:17.747972 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="87d7630a-9844-454d-86d4-30d4da86519b" containerName="nova-scheduler-scheduler" Jan 28 16:08:18 crc kubenswrapper[4903]: I0128 16:08:18.060581 4903 generic.go:334] "Generic (PLEG): container finished" podID="1cff8440-59d9-4491-ae2e-2568b28d8ae3" containerID="6277976b066e33086b843796112a47cd3c785a0a906a6cc042e802b71a70d947" exitCode=0 Jan 28 16:08:18 crc kubenswrapper[4903]: I0128 16:08:18.060620 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" event={"ID":"1cff8440-59d9-4491-ae2e-2568b28d8ae3","Type":"ContainerDied","Data":"6277976b066e33086b843796112a47cd3c785a0a906a6cc042e802b71a70d947"} Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.298546 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.444813 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.584945 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnkkh\" (UniqueName: \"kubernetes.io/projected/1cff8440-59d9-4491-ae2e-2568b28d8ae3-kube-api-access-rnkkh\") pod \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.585001 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-combined-ca-bundle\") pod \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.585025 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-scripts\") pod \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.585089 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-config-data\") pod \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\" (UID: \"1cff8440-59d9-4491-ae2e-2568b28d8ae3\") " Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.601431 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-scripts" (OuterVolumeSpecName: "scripts") pod "1cff8440-59d9-4491-ae2e-2568b28d8ae3" (UID: "1cff8440-59d9-4491-ae2e-2568b28d8ae3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.608787 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cff8440-59d9-4491-ae2e-2568b28d8ae3-kube-api-access-rnkkh" (OuterVolumeSpecName: "kube-api-access-rnkkh") pod "1cff8440-59d9-4491-ae2e-2568b28d8ae3" (UID: "1cff8440-59d9-4491-ae2e-2568b28d8ae3"). InnerVolumeSpecName "kube-api-access-rnkkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.630273 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-config-data" (OuterVolumeSpecName: "config-data") pod "1cff8440-59d9-4491-ae2e-2568b28d8ae3" (UID: "1cff8440-59d9-4491-ae2e-2568b28d8ae3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.637764 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cff8440-59d9-4491-ae2e-2568b28d8ae3" (UID: "1cff8440-59d9-4491-ae2e-2568b28d8ae3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.687085 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.687337 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnkkh\" (UniqueName: \"kubernetes.io/projected/1cff8440-59d9-4491-ae2e-2568b28d8ae3-kube-api-access-rnkkh\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.687455 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:19 crc kubenswrapper[4903]: I0128 16:08:19.687567 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cff8440-59d9-4491-ae2e-2568b28d8ae3-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.085622 4903 generic.go:334] "Generic (PLEG): container finished" podID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerID="886ff9a67b3e067958daeb5c4251e2ed5fcaee5a4c9cdce232e635968d6227e1" exitCode=0 Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.085694 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22ee1756-6329-4378-8d19-965b0d11b3b8","Type":"ContainerDied","Data":"886ff9a67b3e067958daeb5c4251e2ed5fcaee5a4c9cdce232e635968d6227e1"} Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.096261 4903 generic.go:334] "Generic (PLEG): container finished" podID="87d7630a-9844-454d-86d4-30d4da86519b" containerID="2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0" exitCode=0 Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.096349 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87d7630a-9844-454d-86d4-30d4da86519b","Type":"ContainerDied","Data":"2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0"} Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.097806 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" event={"ID":"1cff8440-59d9-4491-ae2e-2568b28d8ae3","Type":"ContainerDied","Data":"6ba150103847121362692c8e5ed35691340b8699941a87779d280a40ab8f2f75"} Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.097835 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ba150103847121362692c8e5ed35691340b8699941a87779d280a40ab8f2f75" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.097886 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6jgbm" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.117672 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203154 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 16:08:20 crc kubenswrapper[4903]: E0128 16:08:20.203576 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerName="dnsmasq-dns" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203596 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerName="dnsmasq-dns" Jan 28 16:08:20 crc kubenswrapper[4903]: E0128 16:08:20.203609 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-api" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203615 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-api" Jan 28 16:08:20 crc kubenswrapper[4903]: E0128 16:08:20.203631 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8934da-e18b-43bc-8a6d-11973760064f" containerName="nova-manage" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203638 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8934da-e18b-43bc-8a6d-11973760064f" containerName="nova-manage" Jan 28 16:08:20 crc kubenswrapper[4903]: E0128 16:08:20.203648 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerName="init" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203653 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerName="init" Jan 28 16:08:20 crc kubenswrapper[4903]: E0128 16:08:20.203667 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cff8440-59d9-4491-ae2e-2568b28d8ae3" containerName="nova-cell1-conductor-db-sync" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203673 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cff8440-59d9-4491-ae2e-2568b28d8ae3" containerName="nova-cell1-conductor-db-sync" Jan 28 16:08:20 crc kubenswrapper[4903]: E0128 16:08:20.203687 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-log" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203692 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-log" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203929 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cff8440-59d9-4491-ae2e-2568b28d8ae3" containerName="nova-cell1-conductor-db-sync" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203948 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="5885ed8d-0267-41a4-9c88-e9be0091674c" containerName="dnsmasq-dns" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203972 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8934da-e18b-43bc-8a6d-11973760064f" containerName="nova-manage" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203981 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" containerName="nova-api-log" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.203993 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" 
containerName="nova-api-api" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.204686 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.208674 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.229237 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.304901 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data\") pod \"22ee1756-6329-4378-8d19-965b0d11b3b8\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.304968 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22ee1756-6329-4378-8d19-965b0d11b3b8-logs\") pod \"22ee1756-6329-4378-8d19-965b0d11b3b8\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.305217 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l76pn\" (UniqueName: \"kubernetes.io/projected/22ee1756-6329-4378-8d19-965b0d11b3b8-kube-api-access-l76pn\") pod \"22ee1756-6329-4378-8d19-965b0d11b3b8\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.305248 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-combined-ca-bundle\") pod \"22ee1756-6329-4378-8d19-965b0d11b3b8\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.305971 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ee1756-6329-4378-8d19-965b0d11b3b8-logs" (OuterVolumeSpecName: "logs") pod "22ee1756-6329-4378-8d19-965b0d11b3b8" (UID: "22ee1756-6329-4378-8d19-965b0d11b3b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.309623 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ee1756-6329-4378-8d19-965b0d11b3b8-kube-api-access-l76pn" (OuterVolumeSpecName: "kube-api-access-l76pn") pod "22ee1756-6329-4378-8d19-965b0d11b3b8" (UID: "22ee1756-6329-4378-8d19-965b0d11b3b8"). InnerVolumeSpecName "kube-api-access-l76pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:20 crc kubenswrapper[4903]: E0128 16:08:20.333994 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data podName:22ee1756-6329-4378-8d19-965b0d11b3b8 nodeName:}" failed. No retries permitted until 2026-01-28 16:08:20.833919257 +0000 UTC m=+1373.109890768 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data") pod "22ee1756-6329-4378-8d19-965b0d11b3b8" (UID: "22ee1756-6329-4378-8d19-965b0d11b3b8") : error deleting /var/lib/kubelet/pods/22ee1756-6329-4378-8d19-965b0d11b3b8/volume-subpaths: remove /var/lib/kubelet/pods/22ee1756-6329-4378-8d19-965b0d11b3b8/volume-subpaths: no such file or directory Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.337498 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22ee1756-6329-4378-8d19-965b0d11b3b8" (UID: "22ee1756-6329-4378-8d19-965b0d11b3b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.407155 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.407205 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.407238 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fjp\" (UniqueName: \"kubernetes.io/projected/d3c39267-5b08-4783-b267-7ee6395020f2-kube-api-access-k7fjp\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.407288 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l76pn\" (UniqueName: \"kubernetes.io/projected/22ee1756-6329-4378-8d19-965b0d11b3b8-kube-api-access-l76pn\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.407302 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.407311 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22ee1756-6329-4378-8d19-965b0d11b3b8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.508559 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7fjp\" (UniqueName: \"kubernetes.io/projected/d3c39267-5b08-4783-b267-7ee6395020f2-kube-api-access-k7fjp\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.508776 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.508811 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.513840 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.514464 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.529262 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7fjp\" (UniqueName: \"kubernetes.io/projected/d3c39267-5b08-4783-b267-7ee6395020f2-kube-api-access-k7fjp\") pod \"nova-cell1-conductor-0\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.545594 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.624809 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.815189 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-config-data\") pod \"87d7630a-9844-454d-86d4-30d4da86519b\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.815256 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsnnj\" (UniqueName: \"kubernetes.io/projected/87d7630a-9844-454d-86d4-30d4da86519b-kube-api-access-zsnnj\") pod \"87d7630a-9844-454d-86d4-30d4da86519b\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.816132 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-combined-ca-bundle\") pod \"87d7630a-9844-454d-86d4-30d4da86519b\" (UID: \"87d7630a-9844-454d-86d4-30d4da86519b\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.820382 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d7630a-9844-454d-86d4-30d4da86519b-kube-api-access-zsnnj" (OuterVolumeSpecName: "kube-api-access-zsnnj") pod "87d7630a-9844-454d-86d4-30d4da86519b" (UID: "87d7630a-9844-454d-86d4-30d4da86519b"). InnerVolumeSpecName "kube-api-access-zsnnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.850313 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-config-data" (OuterVolumeSpecName: "config-data") pod "87d7630a-9844-454d-86d4-30d4da86519b" (UID: "87d7630a-9844-454d-86d4-30d4da86519b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.882706 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87d7630a-9844-454d-86d4-30d4da86519b" (UID: "87d7630a-9844-454d-86d4-30d4da86519b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.917653 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data\") pod \"22ee1756-6329-4378-8d19-965b0d11b3b8\" (UID: \"22ee1756-6329-4378-8d19-965b0d11b3b8\") " Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.918243 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.918261 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsnnj\" (UniqueName: \"kubernetes.io/projected/87d7630a-9844-454d-86d4-30d4da86519b-kube-api-access-zsnnj\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.918273 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87d7630a-9844-454d-86d4-30d4da86519b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:20 crc kubenswrapper[4903]: I0128 16:08:20.920459 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data" (OuterVolumeSpecName: "config-data") pod "22ee1756-6329-4378-8d19-965b0d11b3b8" (UID: "22ee1756-6329-4378-8d19-965b0d11b3b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.018964 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22ee1756-6329-4378-8d19-965b0d11b3b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.109132 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.109147 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"22ee1756-6329-4378-8d19-965b0d11b3b8","Type":"ContainerDied","Data":"5dd56cb037b31d7e66d5097eb0c400da6714f5f312d29098fcc3c14bc88a73b7"} Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.109208 4903 scope.go:117] "RemoveContainer" containerID="886ff9a67b3e067958daeb5c4251e2ed5fcaee5a4c9cdce232e635968d6227e1" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.115552 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.116588 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87d7630a-9844-454d-86d4-30d4da86519b","Type":"ContainerDied","Data":"9d0735f5a5859d6cad14230f27179432808d2bca3be0d5876cbcef27aff9a3a4"} Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.116672 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.154798 4903 scope.go:117] "RemoveContainer" containerID="6be771bdb7dc314a745d84f9f6abf4088696fe523d9425e0f8e0d8fb76839497" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.164334 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.192609 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.214748 4903 scope.go:117] "RemoveContainer" containerID="2b2b4461bc93f47e510dc210c96e4937d503602bb33f2547df978783f15ff4d0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.215668 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.259912 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.279672 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: E0128 16:08:21.280597 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d7630a-9844-454d-86d4-30d4da86519b" containerName="nova-scheduler-scheduler" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.280618 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d7630a-9844-454d-86d4-30d4da86519b" containerName="nova-scheduler-scheduler" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.281063 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d7630a-9844-454d-86d4-30d4da86519b" containerName="nova-scheduler-scheduler" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.284455 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.315397 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.317734 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.324455 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.328205 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.329206 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.367907 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.459373 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-config-data\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.459983 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.460023 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9209cacf-cf97-4251-9d2b-e4279be66d79-logs\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.460622 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-config-data\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.460764 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwglm\" (UniqueName: \"kubernetes.io/projected/d5dad84c-f09f-4430-90cc-febd017d6f72-kube-api-access-fwglm\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.460913 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.461247 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ztg\" (UniqueName: \"kubernetes.io/projected/9209cacf-cf97-4251-9d2b-e4279be66d79-kube-api-access-l4ztg\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.562339 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9209cacf-cf97-4251-9d2b-e4279be66d79-logs\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 
16:08:21.562703 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-config-data\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.562742 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwglm\" (UniqueName: \"kubernetes.io/projected/d5dad84c-f09f-4430-90cc-febd017d6f72-kube-api-access-fwglm\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.562771 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9209cacf-cf97-4251-9d2b-e4279be66d79-logs\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.562785 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.562804 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4ztg\" (UniqueName: \"kubernetes.io/projected/9209cacf-cf97-4251-9d2b-e4279be66d79-kube-api-access-l4ztg\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.562867 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-config-data\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.563147 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.567674 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.568094 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-config-data\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.568895 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-config-data\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.580279 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwglm\" (UniqueName: \"kubernetes.io/projected/d5dad84c-f09f-4430-90cc-febd017d6f72-kube-api-access-fwglm\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.582061 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4ztg\" (UniqueName: \"kubernetes.io/projected/9209cacf-cf97-4251-9d2b-e4279be66d79-kube-api-access-l4ztg\") pod \"nova-api-0\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.583455 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " pod="openstack/nova-scheduler-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.624507 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:21 crc kubenswrapper[4903]: I0128 16:08:21.784204 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.120207 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:22 crc kubenswrapper[4903]: W0128 16:08:22.121976 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9209cacf_cf97_4251_9d2b_e4279be66d79.slice/crio-e0a67e307efae6f84350426f87232c4a696cc45af6cd513a592cdd5f704e46ef WatchSource:0}: Error finding container e0a67e307efae6f84350426f87232c4a696cc45af6cd513a592cdd5f704e46ef: Status 404 returned error can't find the container with id e0a67e307efae6f84350426f87232c4a696cc45af6cd513a592cdd5f704e46ef Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.151077 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3c39267-5b08-4783-b267-7ee6395020f2","Type":"ContainerStarted","Data":"632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049"} Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.151137 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3c39267-5b08-4783-b267-7ee6395020f2","Type":"ContainerStarted","Data":"fc097a85e4b1cf5329f5d0e557314ca12ba9c6a72baee322bebac95f9e836bda"} Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.152594 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.175790 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.175729471 podStartE2EDuration="2.175729471s" podCreationTimestamp="2026-01-28 16:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:22.170720164 +0000 UTC m=+1374.446691695" watchObservedRunningTime="2026-01-28 16:08:22.175729471 +0000 UTC m=+1374.451700982" Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.287221 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] 
Jan 28 16:08:22 crc kubenswrapper[4903]: W0128 16:08:22.296183 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5dad84c_f09f_4430_90cc_febd017d6f72.slice/crio-c440295c7bf20bb14265267534d85f30df200a6e6a65fc8d1e10c49f59656021 WatchSource:0}: Error finding container c440295c7bf20bb14265267534d85f30df200a6e6a65fc8d1e10c49f59656021: Status 404 returned error can't find the container with id c440295c7bf20bb14265267534d85f30df200a6e6a65fc8d1e10c49f59656021 Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.427215 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22ee1756-6329-4378-8d19-965b0d11b3b8" path="/var/lib/kubelet/pods/22ee1756-6329-4378-8d19-965b0d11b3b8/volumes" Jan 28 16:08:22 crc kubenswrapper[4903]: I0128 16:08:22.427899 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d7630a-9844-454d-86d4-30d4da86519b" path="/var/lib/kubelet/pods/87d7630a-9844-454d-86d4-30d4da86519b/volumes" Jan 28 16:08:23 crc kubenswrapper[4903]: I0128 16:08:23.160247 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5dad84c-f09f-4430-90cc-febd017d6f72","Type":"ContainerStarted","Data":"3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6"} Jan 28 16:08:23 crc kubenswrapper[4903]: I0128 16:08:23.160567 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5dad84c-f09f-4430-90cc-febd017d6f72","Type":"ContainerStarted","Data":"c440295c7bf20bb14265267534d85f30df200a6e6a65fc8d1e10c49f59656021"} Jan 28 16:08:23 crc kubenswrapper[4903]: I0128 16:08:23.162302 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9209cacf-cf97-4251-9d2b-e4279be66d79","Type":"ContainerStarted","Data":"4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409"} Jan 28 16:08:23 crc kubenswrapper[4903]: I0128 16:08:23.162367 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9209cacf-cf97-4251-9d2b-e4279be66d79","Type":"ContainerStarted","Data":"93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f"} Jan 28 16:08:23 crc kubenswrapper[4903]: I0128 16:08:23.162387 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9209cacf-cf97-4251-9d2b-e4279be66d79","Type":"ContainerStarted","Data":"e0a67e307efae6f84350426f87232c4a696cc45af6cd513a592cdd5f704e46ef"} Jan 28 16:08:23 crc kubenswrapper[4903]: I0128 16:08:23.199702 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.199666953 podStartE2EDuration="2.199666953s" podCreationTimestamp="2026-01-28 16:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:23.178092555 +0000 UTC m=+1375.454064066" watchObservedRunningTime="2026-01-28 16:08:23.199666953 +0000 UTC m=+1375.475638464" Jan 28 16:08:23 crc kubenswrapper[4903]: I0128 16:08:23.205378 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.205353668 podStartE2EDuration="2.205353668s" podCreationTimestamp="2026-01-28 16:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:23.19478925 +0000 UTC 
m=+1375.470760761" watchObservedRunningTime="2026-01-28 16:08:23.205353668 +0000 UTC m=+1375.481325179" Jan 28 16:08:26 crc kubenswrapper[4903]: I0128 16:08:26.785367 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 16:08:30 crc kubenswrapper[4903]: I0128 16:08:30.570332 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 16:08:31 crc kubenswrapper[4903]: I0128 16:08:31.625867 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:08:31 crc kubenswrapper[4903]: I0128 16:08:31.626507 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:08:31 crc kubenswrapper[4903]: I0128 16:08:31.785108 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 16:08:31 crc kubenswrapper[4903]: I0128 16:08:31.817938 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 16:08:32 crc kubenswrapper[4903]: I0128 16:08:32.270009 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 16:08:32 crc kubenswrapper[4903]: I0128 16:08:32.708737 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 16:08:32 crc kubenswrapper[4903]: I0128 16:08:32.709075 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 16:08:38 crc kubenswrapper[4903]: E0128 16:08:38.163524 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod309c5093_a146_41a6_b0da_f6da00d2bec8.slice/crio-conmon-b6c753060fc37429d6df0849adc55674f2f1d9fd058720bf0ad6a6bf1803a871.scope\": RecentStats: unable to find data in memory cache]" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.292777 4903 generic.go:334] "Generic (PLEG): container finished" podID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerID="b6c753060fc37429d6df0849adc55674f2f1d9fd058720bf0ad6a6bf1803a871" exitCode=137 Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.293479 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"309c5093-a146-41a6-b0da-f6da00d2bec8","Type":"ContainerDied","Data":"b6c753060fc37429d6df0849adc55674f2f1d9fd058720bf0ad6a6bf1803a871"} Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.293654 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"309c5093-a146-41a6-b0da-f6da00d2bec8","Type":"ContainerDied","Data":"d238c6ff8f96bb2e10a321ce400891fc531e6a16ec04647aafbe1857306558e3"} Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.293722 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d238c6ff8f96bb2e10a321ce400891fc531e6a16ec04647aafbe1857306558e3" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 
16:08:38.295241 4903 generic.go:334] "Generic (PLEG): container finished" podID="776e78f3-8a98-48fb-b92a-a56ab4baa23e" containerID="b0255b57a675117144280410a3cc8cc8f9253a8332aa44feaff2f186b1d47758" exitCode=137 Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.295333 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"776e78f3-8a98-48fb-b92a-a56ab4baa23e","Type":"ContainerDied","Data":"b0255b57a675117144280410a3cc8cc8f9253a8332aa44feaff2f186b1d47758"} Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.295414 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"776e78f3-8a98-48fb-b92a-a56ab4baa23e","Type":"ContainerDied","Data":"3216fee2b93dbbe38760f108c9b2c660d35baca7dead00be55aabec8f1647005"} Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.295478 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3216fee2b93dbbe38760f108c9b2c660d35baca7dead00be55aabec8f1647005" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.334284 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.343616 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.457295 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-config-data\") pod \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.457339 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/309c5093-a146-41a6-b0da-f6da00d2bec8-logs\") pod \"309c5093-a146-41a6-b0da-f6da00d2bec8\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.457410 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf69v\" (UniqueName: \"kubernetes.io/projected/309c5093-a146-41a6-b0da-f6da00d2bec8-kube-api-access-pf69v\") pod \"309c5093-a146-41a6-b0da-f6da00d2bec8\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.457428 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-combined-ca-bundle\") pod \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.457461 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-config-data\") pod \"309c5093-a146-41a6-b0da-f6da00d2bec8\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.457483 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5v69\" (UniqueName: \"kubernetes.io/projected/776e78f3-8a98-48fb-b92a-a56ab4baa23e-kube-api-access-r5v69\") pod \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\" (UID: \"776e78f3-8a98-48fb-b92a-a56ab4baa23e\") " Jan 28 16:08:38 crc 
kubenswrapper[4903]: I0128 16:08:38.457585 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-combined-ca-bundle\") pod \"309c5093-a146-41a6-b0da-f6da00d2bec8\" (UID: \"309c5093-a146-41a6-b0da-f6da00d2bec8\") " Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.457828 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/309c5093-a146-41a6-b0da-f6da00d2bec8-logs" (OuterVolumeSpecName: "logs") pod "309c5093-a146-41a6-b0da-f6da00d2bec8" (UID: "309c5093-a146-41a6-b0da-f6da00d2bec8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.458444 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/309c5093-a146-41a6-b0da-f6da00d2bec8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.462596 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309c5093-a146-41a6-b0da-f6da00d2bec8-kube-api-access-pf69v" (OuterVolumeSpecName: "kube-api-access-pf69v") pod "309c5093-a146-41a6-b0da-f6da00d2bec8" (UID: "309c5093-a146-41a6-b0da-f6da00d2bec8"). InnerVolumeSpecName "kube-api-access-pf69v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.464779 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776e78f3-8a98-48fb-b92a-a56ab4baa23e-kube-api-access-r5v69" (OuterVolumeSpecName: "kube-api-access-r5v69") pod "776e78f3-8a98-48fb-b92a-a56ab4baa23e" (UID: "776e78f3-8a98-48fb-b92a-a56ab4baa23e"). InnerVolumeSpecName "kube-api-access-r5v69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.486836 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "776e78f3-8a98-48fb-b92a-a56ab4baa23e" (UID: "776e78f3-8a98-48fb-b92a-a56ab4baa23e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.486897 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "309c5093-a146-41a6-b0da-f6da00d2bec8" (UID: "309c5093-a146-41a6-b0da-f6da00d2bec8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.489696 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-config-data" (OuterVolumeSpecName: "config-data") pod "776e78f3-8a98-48fb-b92a-a56ab4baa23e" (UID: "776e78f3-8a98-48fb-b92a-a56ab4baa23e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.490192 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-config-data" (OuterVolumeSpecName: "config-data") pod "309c5093-a146-41a6-b0da-f6da00d2bec8" (UID: "309c5093-a146-41a6-b0da-f6da00d2bec8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.561732 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf69v\" (UniqueName: \"kubernetes.io/projected/309c5093-a146-41a6-b0da-f6da00d2bec8-kube-api-access-pf69v\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.562042 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.562064 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.562077 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5v69\" (UniqueName: \"kubernetes.io/projected/776e78f3-8a98-48fb-b92a-a56ab4baa23e-kube-api-access-r5v69\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.562088 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/309c5093-a146-41a6-b0da-f6da00d2bec8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:38 crc kubenswrapper[4903]: I0128 16:08:38.562100 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776e78f3-8a98-48fb-b92a-a56ab4baa23e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.305573 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.305613 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.343615 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.371200 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.383580 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.394041 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.406034 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: E0128 16:08:39.423207 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-metadata" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.423305 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-metadata" Jan 28 16:08:39 crc kubenswrapper[4903]: E0128 16:08:39.423350 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776e78f3-8a98-48fb-b92a-a56ab4baa23e" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.423360 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="776e78f3-8a98-48fb-b92a-a56ab4baa23e" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 16:08:39 crc kubenswrapper[4903]: E0128 16:08:39.423388 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-log" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.423395 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-log" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.424276 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="776e78f3-8a98-48fb-b92a-a56ab4baa23e" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.424309 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-metadata" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.424326 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" containerName="nova-metadata-log" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.427227 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.432826 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.435532 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.447925 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.449287 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.450568 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.450972 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.450977 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.451664 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.471134 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.579976 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de3c0640-ef93-45f3-ad08-771d26117dfc-logs\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.580068 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-config-data\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.580623 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.580674 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.580711 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczfh\" (UniqueName: \"kubernetes.io/projected/de3c0640-ef93-45f3-ad08-771d26117dfc-kube-api-access-gczfh\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.580769 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.580816 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9ls\" (UniqueName: \"kubernetes.io/projected/1f8d7105-dc30-4ef6-b862-eb67eefd4026-kube-api-access-cs9ls\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.581091 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.581156 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.581263 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.682874 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczfh\" (UniqueName: \"kubernetes.io/projected/de3c0640-ef93-45f3-ad08-771d26117dfc-kube-api-access-gczfh\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.683197 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.683335 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9ls\" (UniqueName: \"kubernetes.io/projected/1f8d7105-dc30-4ef6-b862-eb67eefd4026-kube-api-access-cs9ls\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.683454 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.683590 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.683731 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.683857 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de3c0640-ef93-45f3-ad08-771d26117dfc-logs\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.683955 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-config-data\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.684896 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.685009 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.684361 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de3c0640-ef93-45f3-ad08-771d26117dfc-logs\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.695521 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.695609 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.695622 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.696155 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.700444 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.700624 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.700834 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-config-data\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.705933 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczfh\" (UniqueName: \"kubernetes.io/projected/de3c0640-ef93-45f3-ad08-771d26117dfc-kube-api-access-gczfh\") pod \"nova-metadata-0\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.705989 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9ls\" (UniqueName: \"kubernetes.io/projected/1f8d7105-dc30-4ef6-b862-eb67eefd4026-kube-api-access-cs9ls\") pod \"nova-cell1-novncproxy-0\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.776550 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:08:39 crc kubenswrapper[4903]: I0128 16:08:39.784667 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:40 crc kubenswrapper[4903]: I0128 16:08:40.316258 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:08:40 crc kubenswrapper[4903]: I0128 16:08:40.387643 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:08:40 crc kubenswrapper[4903]: W0128 16:08:40.394966 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde3c0640_ef93_45f3_ad08_771d26117dfc.slice/crio-95731cc63c60ea938cfe31ba928577650769865dd2ab16fbc7d11702b2c8648e WatchSource:0}: Error finding container 95731cc63c60ea938cfe31ba928577650769865dd2ab16fbc7d11702b2c8648e: Status 404 returned error can't find the container with id 95731cc63c60ea938cfe31ba928577650769865dd2ab16fbc7d11702b2c8648e Jan 28 16:08:40 crc kubenswrapper[4903]: I0128 16:08:40.424819 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309c5093-a146-41a6-b0da-f6da00d2bec8" path="/var/lib/kubelet/pods/309c5093-a146-41a6-b0da-f6da00d2bec8/volumes" Jan 28 16:08:40 crc kubenswrapper[4903]: I0128 16:08:40.425406 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="776e78f3-8a98-48fb-b92a-a56ab4baa23e" path="/var/lib/kubelet/pods/776e78f3-8a98-48fb-b92a-a56ab4baa23e/volumes" Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.340245 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de3c0640-ef93-45f3-ad08-771d26117dfc","Type":"ContainerStarted","Data":"8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6"} Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.341866 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de3c0640-ef93-45f3-ad08-771d26117dfc","Type":"ContainerStarted","Data":"a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75"} Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.341989 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de3c0640-ef93-45f3-ad08-771d26117dfc","Type":"ContainerStarted","Data":"95731cc63c60ea938cfe31ba928577650769865dd2ab16fbc7d11702b2c8648e"} Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.343755 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f8d7105-dc30-4ef6-b862-eb67eefd4026","Type":"ContainerStarted","Data":"8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e"} Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.343805 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f8d7105-dc30-4ef6-b862-eb67eefd4026","Type":"ContainerStarted","Data":"c9790dea5b32c1b6a9f9a411a0fa3cf1d686b63fc4fea92be53f8b53c2e57f69"} Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.362662 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.362641205 podStartE2EDuration="2.362641205s" podCreationTimestamp="2026-01-28 16:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:41.359075409 +0000 UTC m=+1393.635046930" watchObservedRunningTime="2026-01-28 16:08:41.362641205 +0000 UTC m=+1393.638612726" Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 
16:08:41.393092 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.396409 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.396389566 podStartE2EDuration="2.396389566s" podCreationTimestamp="2026-01-28 16:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:41.382912539 +0000 UTC m=+1393.658884050" watchObservedRunningTime="2026-01-28 16:08:41.396389566 +0000 UTC m=+1393.672361087" Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.792379 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.793288 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.795761 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 16:08:41 crc kubenswrapper[4903]: I0128 16:08:41.799095 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.387312 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.390561 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.574897 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-zk982"] Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.577182 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.594909 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-zk982"] Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.748310 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-svc\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.748382 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbjt\" (UniqueName: \"kubernetes.io/projected/dad42813-08ad-4746-b488-af16a6504561-kube-api-access-jkbjt\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.748437 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.748586 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.748636 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-config\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.748694 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.849718 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.849928 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-svc\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.850050 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jkbjt\" (UniqueName: \"kubernetes.io/projected/dad42813-08ad-4746-b488-af16a6504561-kube-api-access-jkbjt\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.850439 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.850634 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-svc\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.850887 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.851203 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.851701 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.852401 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.852630 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-config\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.853219 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-config\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.870168 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbjt\" (UniqueName: 
\"kubernetes.io/projected/dad42813-08ad-4746-b488-af16a6504561-kube-api-access-jkbjt\") pod \"dnsmasq-dns-5ddd577785-zk982\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:42 crc kubenswrapper[4903]: I0128 16:08:42.905078 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:43 crc kubenswrapper[4903]: I0128 16:08:43.420787 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-zk982"] Jan 28 16:08:44 crc kubenswrapper[4903]: I0128 16:08:44.409373 4903 generic.go:334] "Generic (PLEG): container finished" podID="dad42813-08ad-4746-b488-af16a6504561" containerID="a67f772dccae3b47ab1f4d72830713aa1130fd35e60e57d62e1f436580945a77" exitCode=0 Jan 28 16:08:44 crc kubenswrapper[4903]: I0128 16:08:44.410102 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-zk982" event={"ID":"dad42813-08ad-4746-b488-af16a6504561","Type":"ContainerDied","Data":"a67f772dccae3b47ab1f4d72830713aa1130fd35e60e57d62e1f436580945a77"} Jan 28 16:08:44 crc kubenswrapper[4903]: I0128 16:08:44.411389 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-zk982" event={"ID":"dad42813-08ad-4746-b488-af16a6504561","Type":"ContainerStarted","Data":"d046ae0a0501f3b550adf715db850dd87f629f2ee82a870ede30ad87c4e9f9f6"} Jan 28 16:08:44 crc kubenswrapper[4903]: I0128 16:08:44.776830 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 16:08:44 crc kubenswrapper[4903]: I0128 16:08:44.776896 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 16:08:44 crc kubenswrapper[4903]: I0128 16:08:44.785465 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.422967 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-zk982" event={"ID":"dad42813-08ad-4746-b488-af16a6504561","Type":"ContainerStarted","Data":"ac5fa928a6299fa4da555a268ab5014fe09528230a48dee3048b346cb50eab23"} Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.423434 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.462633 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.462926 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-log" containerID="cri-o://93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f" gracePeriod=30 Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.463422 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-api" containerID="cri-o://4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409" gracePeriod=30 Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.473506 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-zk982" podStartSLOduration=3.473485286 podStartE2EDuration="3.473485286s" podCreationTimestamp="2026-01-28 
16:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:45.459512235 +0000 UTC m=+1397.735483756" watchObservedRunningTime="2026-01-28 16:08:45.473485286 +0000 UTC m=+1397.749456797" Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.825971 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.826293 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-central-agent" containerID="cri-o://6208e65787fde5e4a197f4021077ab14af4e2cfe8f6c3dac084a147e070ddc73" gracePeriod=30 Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.826378 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="proxy-httpd" containerID="cri-o://ac49defcc977e6c260d4743e9000a1e960aad0f05447b83c8d3471c0be564349" gracePeriod=30 Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.826390 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-notification-agent" containerID="cri-o://5a24b6f133be8724bb507b4e07921d6f2881f6d7f964099e7ffc67db065083a0" gracePeriod=30 Jan 28 16:08:45 crc kubenswrapper[4903]: I0128 16:08:45.826368 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="sg-core" containerID="cri-o://9b72e9c8533bb484a5098b45f3eefd44f36db58f9766a8b98b45742025cd67d5" gracePeriod=30 Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.433907 4903 generic.go:334] "Generic (PLEG): container finished" podID="483753d5-378b-4dcf-a462-1fb273e851cc" containerID="ac49defcc977e6c260d4743e9000a1e960aad0f05447b83c8d3471c0be564349" exitCode=0 Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.433946 4903 generic.go:334] "Generic (PLEG): container finished" podID="483753d5-378b-4dcf-a462-1fb273e851cc" containerID="9b72e9c8533bb484a5098b45f3eefd44f36db58f9766a8b98b45742025cd67d5" exitCode=2 Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.433957 4903 generic.go:334] "Generic (PLEG): container finished" podID="483753d5-378b-4dcf-a462-1fb273e851cc" containerID="6208e65787fde5e4a197f4021077ab14af4e2cfe8f6c3dac084a147e070ddc73" exitCode=0 Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.434006 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerDied","Data":"ac49defcc977e6c260d4743e9000a1e960aad0f05447b83c8d3471c0be564349"} Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.434069 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerDied","Data":"9b72e9c8533bb484a5098b45f3eefd44f36db58f9766a8b98b45742025cd67d5"} Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.434083 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerDied","Data":"6208e65787fde5e4a197f4021077ab14af4e2cfe8f6c3dac084a147e070ddc73"} Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.436279 
4903 generic.go:334] "Generic (PLEG): container finished" podID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerID="93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f" exitCode=143 Jan 28 16:08:46 crc kubenswrapper[4903]: I0128 16:08:46.436353 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9209cacf-cf97-4251-9d2b-e4279be66d79","Type":"ContainerDied","Data":"93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f"} Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.756431 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-szjp7"] Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.760537 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.789444 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szjp7"] Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.849844 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-utilities\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.849922 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-catalog-content\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.849988 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s478m\" (UniqueName: \"kubernetes.io/projected/35727cb3-f700-42e6-b472-6b84872e40af-kube-api-access-s478m\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.952088 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-utilities\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.952166 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-catalog-content\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.952218 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s478m\" (UniqueName: \"kubernetes.io/projected/35727cb3-f700-42e6-b472-6b84872e40af-kube-api-access-s478m\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.952672 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-utilities\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.952928 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-catalog-content\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:47 crc kubenswrapper[4903]: I0128 16:08:47.976297 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s478m\" (UniqueName: \"kubernetes.io/projected/35727cb3-f700-42e6-b472-6b84872e40af-kube-api-access-s478m\") pod \"redhat-operators-szjp7\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:48 crc kubenswrapper[4903]: I0128 16:08:48.080465 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:48 crc kubenswrapper[4903]: I0128 16:08:48.566490 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-szjp7"] Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.199089 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.382613 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-config-data\") pod \"9209cacf-cf97-4251-9d2b-e4279be66d79\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.382700 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9209cacf-cf97-4251-9d2b-e4279be66d79-logs\") pod \"9209cacf-cf97-4251-9d2b-e4279be66d79\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.382813 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-combined-ca-bundle\") pod \"9209cacf-cf97-4251-9d2b-e4279be66d79\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.382848 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4ztg\" (UniqueName: \"kubernetes.io/projected/9209cacf-cf97-4251-9d2b-e4279be66d79-kube-api-access-l4ztg\") pod \"9209cacf-cf97-4251-9d2b-e4279be66d79\" (UID: \"9209cacf-cf97-4251-9d2b-e4279be66d79\") " Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.401692 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9209cacf-cf97-4251-9d2b-e4279be66d79-logs" (OuterVolumeSpecName: "logs") pod "9209cacf-cf97-4251-9d2b-e4279be66d79" (UID: "9209cacf-cf97-4251-9d2b-e4279be66d79"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.402413 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9209cacf-cf97-4251-9d2b-e4279be66d79-kube-api-access-l4ztg" (OuterVolumeSpecName: "kube-api-access-l4ztg") pod "9209cacf-cf97-4251-9d2b-e4279be66d79" (UID: "9209cacf-cf97-4251-9d2b-e4279be66d79"). InnerVolumeSpecName "kube-api-access-l4ztg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.478728 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-config-data" (OuterVolumeSpecName: "config-data") pod "9209cacf-cf97-4251-9d2b-e4279be66d79" (UID: "9209cacf-cf97-4251-9d2b-e4279be66d79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.481254 4903 generic.go:334] "Generic (PLEG): container finished" podID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerID="4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409" exitCode=0 Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.481335 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.481351 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9209cacf-cf97-4251-9d2b-e4279be66d79","Type":"ContainerDied","Data":"4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409"} Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.481414 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9209cacf-cf97-4251-9d2b-e4279be66d79","Type":"ContainerDied","Data":"e0a67e307efae6f84350426f87232c4a696cc45af6cd513a592cdd5f704e46ef"} Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.481441 4903 scope.go:117] "RemoveContainer" containerID="4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.483651 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9209cacf-cf97-4251-9d2b-e4279be66d79" (UID: "9209cacf-cf97-4251-9d2b-e4279be66d79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.487014 4903 generic.go:334] "Generic (PLEG): container finished" podID="35727cb3-f700-42e6-b472-6b84872e40af" containerID="5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05" exitCode=0 Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.487577 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szjp7" event={"ID":"35727cb3-f700-42e6-b472-6b84872e40af","Type":"ContainerDied","Data":"5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05"} Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.487611 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szjp7" event={"ID":"35727cb3-f700-42e6-b472-6b84872e40af","Type":"ContainerStarted","Data":"c6b774d1a5d59882caaca8d0fbfe7afbe7ddb86102b12f77d4a54b4fa87591a4"} Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.489810 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.489831 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9209cacf-cf97-4251-9d2b-e4279be66d79-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.489843 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9209cacf-cf97-4251-9d2b-e4279be66d79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.489853 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4ztg\" (UniqueName: \"kubernetes.io/projected/9209cacf-cf97-4251-9d2b-e4279be66d79-kube-api-access-l4ztg\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.493641 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.550284 4903 scope.go:117] "RemoveContainer" containerID="93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.585315 4903 scope.go:117] "RemoveContainer" containerID="4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409" Jan 28 16:08:49 crc kubenswrapper[4903]: E0128 16:08:49.585861 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409\": container with ID starting with 4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409 not found: ID does not exist" containerID="4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.585931 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409"} err="failed to get container status \"4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409\": rpc error: code = NotFound desc = could not find container \"4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409\": container with ID starting with 
4dfcf36615691145d04d2050dbb0288696563a2cd45936df68797b92aec3b409 not found: ID does not exist" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.585980 4903 scope.go:117] "RemoveContainer" containerID="93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f" Jan 28 16:08:49 crc kubenswrapper[4903]: E0128 16:08:49.586323 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f\": container with ID starting with 93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f not found: ID does not exist" containerID="93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.586355 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f"} err="failed to get container status \"93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f\": rpc error: code = NotFound desc = could not find container \"93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f\": container with ID starting with 93c3b186ec773d3c4583dc88b9208dfd5cf2cc1ca7abaf5e578469e1bbd9b55f not found: ID does not exist" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.777464 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.777518 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.785855 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.804318 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.819841 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.830307 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.844462 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:49 crc kubenswrapper[4903]: E0128 16:08:49.845035 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-log" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.845061 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-log" Jan 28 16:08:49 crc kubenswrapper[4903]: E0128 16:08:49.845099 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-api" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.845109 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-api" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.845359 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-log" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.845390 4903 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" containerName="nova-api-api" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.846737 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.849753 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.849789 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.853675 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.868721 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.998867 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-config-data\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.998961 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.999047 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.999068 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6p52\" (UniqueName: \"kubernetes.io/projected/56989c3c-0982-4534-9efa-7231440dad98-kube-api-access-b6p52\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.999154 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:49 crc kubenswrapper[4903]: I0128 16:08:49.999180 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56989c3c-0982-4534-9efa-7231440dad98-logs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.100259 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 
16:08:50.100302 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6p52\" (UniqueName: \"kubernetes.io/projected/56989c3c-0982-4534-9efa-7231440dad98-kube-api-access-b6p52\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.100373 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.100398 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56989c3c-0982-4534-9efa-7231440dad98-logs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.100456 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-config-data\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.100503 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.101080 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56989c3c-0982-4534-9efa-7231440dad98-logs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.105074 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.105149 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-config-data\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.105793 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-internal-tls-certs\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.119295 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.123252 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b6p52\" (UniqueName: \"kubernetes.io/projected/56989c3c-0982-4534-9efa-7231440dad98-kube-api-access-b6p52\") pod \"nova-api-0\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.163321 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.424451 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9209cacf-cf97-4251-9d2b-e4279be66d79" path="/var/lib/kubelet/pods/9209cacf-cf97-4251-9d2b-e4279be66d79/volumes" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.525926 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.661718 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.758577 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-z8mw6"] Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.760222 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.762896 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.763089 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.771165 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z8mw6"] Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.793786 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.793860 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.915866 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshwc\" (UniqueName: \"kubernetes.io/projected/a966d5dc-c13b-4925-bc59-64f40ee7f334-kube-api-access-jshwc\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.915921 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.916071 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-config-data\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:50 crc kubenswrapper[4903]: I0128 16:08:50.916285 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-scripts\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.018172 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshwc\" (UniqueName: \"kubernetes.io/projected/a966d5dc-c13b-4925-bc59-64f40ee7f334-kube-api-access-jshwc\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.018243 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.018301 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-config-data\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.018385 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-scripts\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.023324 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-config-data\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.023327 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.026285 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-scripts\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.040063 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshwc\" (UniqueName: 
\"kubernetes.io/projected/a966d5dc-c13b-4925-bc59-64f40ee7f334-kube-api-access-jshwc\") pod \"nova-cell1-cell-mapping-z8mw6\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.097834 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.532033 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szjp7" event={"ID":"35727cb3-f700-42e6-b472-6b84872e40af","Type":"ContainerStarted","Data":"2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5"} Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.547867 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56989c3c-0982-4534-9efa-7231440dad98","Type":"ContainerStarted","Data":"a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f"} Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.547906 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56989c3c-0982-4534-9efa-7231440dad98","Type":"ContainerStarted","Data":"c9c212f4bf89c94a8bad869209a20040ae8687b487044867f878994e815f8d6e"} Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.575219 4903 generic.go:334] "Generic (PLEG): container finished" podID="483753d5-378b-4dcf-a462-1fb273e851cc" containerID="5a24b6f133be8724bb507b4e07921d6f2881f6d7f964099e7ffc67db065083a0" exitCode=0 Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.576213 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerDied","Data":"5a24b6f133be8724bb507b4e07921d6f2881f6d7f964099e7ffc67db065083a0"} Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.576247 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"483753d5-378b-4dcf-a462-1fb273e851cc","Type":"ContainerDied","Data":"c0014e065823cdde8cb68918533e5ac2bb439791bdd40b915fed5e58ab5093c8"} Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.576257 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0014e065823cdde8cb68918533e5ac2bb439791bdd40b915fed5e58ab5093c8" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.714465 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.838314 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-log-httpd\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.838758 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-combined-ca-bundle\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.838790 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-config-data\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.838849 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-run-httpd\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.838956 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-ceilometer-tls-certs\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.839028 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vlbs\" (UniqueName: \"kubernetes.io/projected/483753d5-378b-4dcf-a462-1fb273e851cc-kube-api-access-4vlbs\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.839046 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-sg-core-conf-yaml\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.839098 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-scripts\") pod \"483753d5-378b-4dcf-a462-1fb273e851cc\" (UID: \"483753d5-378b-4dcf-a462-1fb273e851cc\") " Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.840221 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.840596 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.848925 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483753d5-378b-4dcf-a462-1fb273e851cc-kube-api-access-4vlbs" (OuterVolumeSpecName: "kube-api-access-4vlbs") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "kube-api-access-4vlbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.851569 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-scripts" (OuterVolumeSpecName: "scripts") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.888764 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.930663 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.943499 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.943541 4903 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.943551 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vlbs\" (UniqueName: \"kubernetes.io/projected/483753d5-378b-4dcf-a462-1fb273e851cc-kube-api-access-4vlbs\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.943561 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.943568 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.943577 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/483753d5-378b-4dcf-a462-1fb273e851cc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.946420 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.992675 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-config-data" (OuterVolumeSpecName: "config-data") pod "483753d5-378b-4dcf-a462-1fb273e851cc" (UID: "483753d5-378b-4dcf-a462-1fb273e851cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4903]: I0128 16:08:51.997294 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z8mw6"] Jan 28 16:08:52 crc kubenswrapper[4903]: W0128 16:08:52.000676 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda966d5dc_c13b_4925_bc59_64f40ee7f334.slice/crio-e1c09970c3ddd9fe7fd604bc48c54d1f2551a82c25b4e9591804af14081f39f7 WatchSource:0}: Error finding container e1c09970c3ddd9fe7fd604bc48c54d1f2551a82c25b4e9591804af14081f39f7: Status 404 returned error can't find the container with id e1c09970c3ddd9fe7fd604bc48c54d1f2551a82c25b4e9591804af14081f39f7 Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.046928 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.046963 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/483753d5-378b-4dcf-a462-1fb273e851cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.587262 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56989c3c-0982-4534-9efa-7231440dad98","Type":"ContainerStarted","Data":"d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c"} Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.590891 4903 generic.go:334] "Generic (PLEG): container finished" podID="35727cb3-f700-42e6-b472-6b84872e40af" containerID="2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5" exitCode=0 Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.590942 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szjp7" event={"ID":"35727cb3-f700-42e6-b472-6b84872e40af","Type":"ContainerDied","Data":"2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5"} Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.592620 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.592658 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z8mw6" event={"ID":"a966d5dc-c13b-4925-bc59-64f40ee7f334","Type":"ContainerStarted","Data":"e3e71f83d63ae6618fa225dc48da3f3defa052af3751ca0db1ffac97bca25831"} Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.592674 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z8mw6" event={"ID":"a966d5dc-c13b-4925-bc59-64f40ee7f334","Type":"ContainerStarted","Data":"e1c09970c3ddd9fe7fd604bc48c54d1f2551a82c25b4e9591804af14081f39f7"} Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.611660 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.61163983 podStartE2EDuration="3.61163983s" podCreationTimestamp="2026-01-28 16:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:52.606802558 +0000 UTC m=+1404.882774079" watchObservedRunningTime="2026-01-28 16:08:52.61163983 +0000 UTC m=+1404.887611341" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.631144 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-z8mw6" podStartSLOduration=2.631125491 podStartE2EDuration="2.631125491s" podCreationTimestamp="2026-01-28 16:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:08:52.626890835 +0000 UTC m=+1404.902862356" watchObservedRunningTime="2026-01-28 16:08:52.631125491 +0000 UTC m=+1404.907097002" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.658656 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.681374 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.690766 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:52 crc kubenswrapper[4903]: E0128 16:08:52.691270 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="proxy-httpd" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691291 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="proxy-httpd" Jan 28 16:08:52 crc kubenswrapper[4903]: E0128 16:08:52.691339 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-notification-agent" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691346 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-notification-agent" Jan 28 16:08:52 crc kubenswrapper[4903]: E0128 16:08:52.691361 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-central-agent" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691367 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-central-agent" Jan 28 16:08:52 crc kubenswrapper[4903]: E0128 16:08:52.691385 4903 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="sg-core" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691390 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="sg-core" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691576 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-notification-agent" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691595 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="sg-core" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691605 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="proxy-httpd" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.691621 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" containerName="ceilometer-central-agent" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.693371 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.700027 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.700266 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.700359 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.702853 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.861082 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.861142 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-log-httpd\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.861850 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-config-data\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.861895 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-scripts\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.861934 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-run-httpd\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.861963 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.862055 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.862083 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmds\" (UniqueName: \"kubernetes.io/projected/07a65ed0-8012-4a4a-b973-8b1fcdafef52-kube-api-access-jsmds\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.906757 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.963916 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-config-data\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.963960 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-scripts\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.963988 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-run-httpd\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.964011 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.964085 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.964110 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmds\" 
(UniqueName: \"kubernetes.io/projected/07a65ed0-8012-4a4a-b973-8b1fcdafef52-kube-api-access-jsmds\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.964141 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.964162 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-log-httpd\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.964650 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-log-httpd\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.970227 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-run-httpd\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.975965 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.976228 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.984316 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.994968 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-scripts\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:52 crc kubenswrapper[4903]: I0128 16:08:52.997660 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-config-data\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.001009 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmds\" (UniqueName: 
\"kubernetes.io/projected/07a65ed0-8012-4a4a-b973-8b1fcdafef52-kube-api-access-jsmds\") pod \"ceilometer-0\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " pod="openstack/ceilometer-0" Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.015595 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.047332 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-85fkw"] Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.047597 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" podUID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerName="dnsmasq-dns" containerID="cri-o://6d6d1771a09a377962155d33bf5389253d5717c79cf93cba1a218f8eb08c3def" gracePeriod=10 Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.604807 4903 generic.go:334] "Generic (PLEG): container finished" podID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerID="6d6d1771a09a377962155d33bf5389253d5717c79cf93cba1a218f8eb08c3def" exitCode=0 Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.605134 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" event={"ID":"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd","Type":"ContainerDied","Data":"6d6d1771a09a377962155d33bf5389253d5717c79cf93cba1a218f8eb08c3def"} Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.618804 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szjp7" event={"ID":"35727cb3-f700-42e6-b472-6b84872e40af","Type":"ContainerStarted","Data":"febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9"} Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.624823 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:08:53 crc kubenswrapper[4903]: I0128 16:08:53.654019 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-szjp7" podStartSLOduration=3.761398146 podStartE2EDuration="6.653996894s" podCreationTimestamp="2026-01-28 16:08:47 +0000 UTC" firstStartedPulling="2026-01-28 16:08:49.493407296 +0000 UTC m=+1401.769378807" lastFinishedPulling="2026-01-28 16:08:52.386006044 +0000 UTC m=+1404.661977555" observedRunningTime="2026-01-28 16:08:53.646190211 +0000 UTC m=+1405.922161722" watchObservedRunningTime="2026-01-28 16:08:53.653996894 +0000 UTC m=+1405.929968425" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.049391 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.197306 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hs9d\" (UniqueName: \"kubernetes.io/projected/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-kube-api-access-2hs9d\") pod \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.197346 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-nb\") pod \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.197491 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-config\") pod \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.197563 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-swift-storage-0\") pod \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.197614 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-sb\") pod \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.197647 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-svc\") pod \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\" (UID: \"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd\") " Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.211049 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-kube-api-access-2hs9d" (OuterVolumeSpecName: "kube-api-access-2hs9d") pod "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" (UID: "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd"). InnerVolumeSpecName "kube-api-access-2hs9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.264278 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-config" (OuterVolumeSpecName: "config") pod "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" (UID: "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.266591 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" (UID: "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.270388 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" (UID: "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.271140 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" (UID: "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.296324 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" (UID: "f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.313483 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.313523 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.313555 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.313566 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hs9d\" (UniqueName: \"kubernetes.io/projected/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-kube-api-access-2hs9d\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.313580 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.313590 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.425066 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483753d5-378b-4dcf-a462-1fb273e851cc" path="/var/lib/kubelet/pods/483753d5-378b-4dcf-a462-1fb273e851cc/volumes" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.631993 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.631960 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-85fkw" event={"ID":"f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd","Type":"ContainerDied","Data":"9392e2d2e84df73b72b11a01e286c57679919ed0b66076115429fef4b16ee0b9"} Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.632581 4903 scope.go:117] "RemoveContainer" containerID="6d6d1771a09a377962155d33bf5389253d5717c79cf93cba1a218f8eb08c3def" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.633277 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerStarted","Data":"7834db687f4c4abcfb882be9e49644d6d743600f2ea5ff2f01a5f1dbde3c0e9f"} Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.649916 4903 scope.go:117] "RemoveContainer" containerID="459568d7715070f82d1d07692a88852d708864004ce8865d8c244320b036fa82" Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.667993 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-85fkw"] Jan 28 16:08:54 crc kubenswrapper[4903]: I0128 16:08:54.689200 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-85fkw"] Jan 28 16:08:55 crc kubenswrapper[4903]: I0128 16:08:55.642545 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerStarted","Data":"843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e"} Jan 28 16:08:55 crc kubenswrapper[4903]: I0128 16:08:55.642797 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerStarted","Data":"4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1"} Jan 28 16:08:56 crc kubenswrapper[4903]: I0128 16:08:56.435584 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" path="/var/lib/kubelet/pods/f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd/volumes" Jan 28 16:08:56 crc kubenswrapper[4903]: I0128 16:08:56.654308 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerStarted","Data":"f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978"} Jan 28 16:08:58 crc kubenswrapper[4903]: I0128 16:08:58.081245 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:58 crc kubenswrapper[4903]: I0128 16:08:58.081647 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:08:58 crc kubenswrapper[4903]: I0128 16:08:58.678939 4903 generic.go:334] "Generic (PLEG): container finished" podID="a966d5dc-c13b-4925-bc59-64f40ee7f334" containerID="e3e71f83d63ae6618fa225dc48da3f3defa052af3751ca0db1ffac97bca25831" exitCode=0 Jan 28 16:08:58 crc kubenswrapper[4903]: I0128 16:08:58.679043 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z8mw6" event={"ID":"a966d5dc-c13b-4925-bc59-64f40ee7f334","Type":"ContainerDied","Data":"e3e71f83d63ae6618fa225dc48da3f3defa052af3751ca0db1ffac97bca25831"} Jan 28 16:08:59 crc kubenswrapper[4903]: I0128 16:08:59.158194 4903 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-szjp7" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="registry-server" probeResult="failure" output=< Jan 28 16:08:59 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 16:08:59 crc kubenswrapper[4903]: > Jan 28 16:08:59 crc kubenswrapper[4903]: I0128 16:08:59.691517 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerStarted","Data":"245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77"} Jan 28 16:08:59 crc kubenswrapper[4903]: I0128 16:08:59.691706 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 16:08:59 crc kubenswrapper[4903]: I0128 16:08:59.722487 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.865733288 podStartE2EDuration="7.722473037s" podCreationTimestamp="2026-01-28 16:08:52 +0000 UTC" firstStartedPulling="2026-01-28 16:08:53.636488086 +0000 UTC m=+1405.912459597" lastFinishedPulling="2026-01-28 16:08:58.493227825 +0000 UTC m=+1410.769199346" observedRunningTime="2026-01-28 16:08:59.720798192 +0000 UTC m=+1411.996769703" watchObservedRunningTime="2026-01-28 16:08:59.722473037 +0000 UTC m=+1411.998444548" Jan 28 16:08:59 crc kubenswrapper[4903]: I0128 16:08:59.784687 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 16:08:59 crc kubenswrapper[4903]: I0128 16:08:59.788323 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 16:08:59 crc kubenswrapper[4903]: I0128 16:08:59.846659 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.112636 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.164065 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.164513 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.236295 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-combined-ca-bundle\") pod \"a966d5dc-c13b-4925-bc59-64f40ee7f334\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.236393 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-scripts\") pod \"a966d5dc-c13b-4925-bc59-64f40ee7f334\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.236445 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-config-data\") pod \"a966d5dc-c13b-4925-bc59-64f40ee7f334\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.236585 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jshwc\" (UniqueName: \"kubernetes.io/projected/a966d5dc-c13b-4925-bc59-64f40ee7f334-kube-api-access-jshwc\") pod \"a966d5dc-c13b-4925-bc59-64f40ee7f334\" (UID: \"a966d5dc-c13b-4925-bc59-64f40ee7f334\") " Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.242387 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a966d5dc-c13b-4925-bc59-64f40ee7f334-kube-api-access-jshwc" (OuterVolumeSpecName: "kube-api-access-jshwc") pod "a966d5dc-c13b-4925-bc59-64f40ee7f334" (UID: "a966d5dc-c13b-4925-bc59-64f40ee7f334"). InnerVolumeSpecName "kube-api-access-jshwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.250059 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-scripts" (OuterVolumeSpecName: "scripts") pod "a966d5dc-c13b-4925-bc59-64f40ee7f334" (UID: "a966d5dc-c13b-4925-bc59-64f40ee7f334"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.265943 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a966d5dc-c13b-4925-bc59-64f40ee7f334" (UID: "a966d5dc-c13b-4925-bc59-64f40ee7f334"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.271083 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-config-data" (OuterVolumeSpecName: "config-data") pod "a966d5dc-c13b-4925-bc59-64f40ee7f334" (UID: "a966d5dc-c13b-4925-bc59-64f40ee7f334"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.340112 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.340333 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.340383 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a966d5dc-c13b-4925-bc59-64f40ee7f334-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.340399 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jshwc\" (UniqueName: \"kubernetes.io/projected/a966d5dc-c13b-4925-bc59-64f40ee7f334-kube-api-access-jshwc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.703009 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z8mw6" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.703211 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z8mw6" event={"ID":"a966d5dc-c13b-4925-bc59-64f40ee7f334","Type":"ContainerDied","Data":"e1c09970c3ddd9fe7fd604bc48c54d1f2551a82c25b4e9591804af14081f39f7"} Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.703351 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1c09970c3ddd9fe7fd604bc48c54d1f2551a82c25b4e9591804af14081f39f7" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.718315 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.909406 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.926721 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.926941 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d5dad84c-f09f-4430-90cc-febd017d6f72" containerName="nova-scheduler-scheduler" containerID="cri-o://3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6" gracePeriod=30 Jan 28 16:09:00 crc kubenswrapper[4903]: I0128 16:09:00.955747 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:01 crc kubenswrapper[4903]: I0128 16:09:01.175680 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 16:09:01 crc kubenswrapper[4903]: I0128 16:09:01.175724 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 
16:09:01 crc kubenswrapper[4903]: I0128 16:09:01.711153 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-log" containerID="cri-o://a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f" gracePeriod=30 Jan 28 16:09:01 crc kubenswrapper[4903]: I0128 16:09:01.711272 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-api" containerID="cri-o://d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c" gracePeriod=30 Jan 28 16:09:01 crc kubenswrapper[4903]: E0128 16:09:01.786620 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:09:01 crc kubenswrapper[4903]: E0128 16:09:01.798998 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:09:01 crc kubenswrapper[4903]: E0128 16:09:01.801581 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:09:01 crc kubenswrapper[4903]: E0128 16:09:01.801659 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d5dad84c-f09f-4430-90cc-febd017d6f72" containerName="nova-scheduler-scheduler" Jan 28 16:09:02 crc kubenswrapper[4903]: I0128 16:09:02.721466 4903 generic.go:334] "Generic (PLEG): container finished" podID="56989c3c-0982-4534-9efa-7231440dad98" containerID="a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f" exitCode=143 Jan 28 16:09:02 crc kubenswrapper[4903]: I0128 16:09:02.721924 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-log" containerID="cri-o://a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75" gracePeriod=30 Jan 28 16:09:02 crc kubenswrapper[4903]: I0128 16:09:02.722120 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56989c3c-0982-4534-9efa-7231440dad98","Type":"ContainerDied","Data":"a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f"} Jan 28 16:09:02 crc kubenswrapper[4903]: I0128 16:09:02.722395 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-metadata" containerID="cri-o://8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6" gracePeriod=30 Jan 28 16:09:03 crc kubenswrapper[4903]: I0128 
16:09:03.734415 4903 generic.go:334] "Generic (PLEG): container finished" podID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerID="a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75" exitCode=143 Jan 28 16:09:03 crc kubenswrapper[4903]: I0128 16:09:03.734492 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de3c0640-ef93-45f3-ad08-771d26117dfc","Type":"ContainerDied","Data":"a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75"} Jan 28 16:09:05 crc kubenswrapper[4903]: I0128 16:09:05.769023 4903 generic.go:334] "Generic (PLEG): container finished" podID="d5dad84c-f09f-4430-90cc-febd017d6f72" containerID="3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6" exitCode=0 Jan 28 16:09:05 crc kubenswrapper[4903]: I0128 16:09:05.769386 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5dad84c-f09f-4430-90cc-febd017d6f72","Type":"ContainerDied","Data":"3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6"} Jan 28 16:09:05 crc kubenswrapper[4903]: I0128 16:09:05.909124 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:55726->10.217.0.193:8775: read: connection reset by peer" Jan 28 16:09:05 crc kubenswrapper[4903]: I0128 16:09:05.909187 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:55734->10.217.0.193:8775: read: connection reset by peer" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.344270 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.354273 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.460155 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-combined-ca-bundle\") pod \"de3c0640-ef93-45f3-ad08-771d26117dfc\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.461826 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-nova-metadata-tls-certs\") pod \"de3c0640-ef93-45f3-ad08-771d26117dfc\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.461984 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de3c0640-ef93-45f3-ad08-771d26117dfc-logs\") pod \"de3c0640-ef93-45f3-ad08-771d26117dfc\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.462024 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwglm\" (UniqueName: \"kubernetes.io/projected/d5dad84c-f09f-4430-90cc-febd017d6f72-kube-api-access-fwglm\") pod \"d5dad84c-f09f-4430-90cc-febd017d6f72\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.462122 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-config-data\") pod \"de3c0640-ef93-45f3-ad08-771d26117dfc\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.462164 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczfh\" (UniqueName: \"kubernetes.io/projected/de3c0640-ef93-45f3-ad08-771d26117dfc-kube-api-access-gczfh\") pod \"de3c0640-ef93-45f3-ad08-771d26117dfc\" (UID: \"de3c0640-ef93-45f3-ad08-771d26117dfc\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.462190 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-combined-ca-bundle\") pod \"d5dad84c-f09f-4430-90cc-febd017d6f72\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.462304 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-config-data\") pod \"d5dad84c-f09f-4430-90cc-febd017d6f72\" (UID: \"d5dad84c-f09f-4430-90cc-febd017d6f72\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.462692 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3c0640-ef93-45f3-ad08-771d26117dfc-logs" (OuterVolumeSpecName: "logs") pod "de3c0640-ef93-45f3-ad08-771d26117dfc" (UID: "de3c0640-ef93-45f3-ad08-771d26117dfc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.463121 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de3c0640-ef93-45f3-ad08-771d26117dfc-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.470924 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5dad84c-f09f-4430-90cc-febd017d6f72-kube-api-access-fwglm" (OuterVolumeSpecName: "kube-api-access-fwglm") pod "d5dad84c-f09f-4430-90cc-febd017d6f72" (UID: "d5dad84c-f09f-4430-90cc-febd017d6f72"). InnerVolumeSpecName "kube-api-access-fwglm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.477673 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3c0640-ef93-45f3-ad08-771d26117dfc-kube-api-access-gczfh" (OuterVolumeSpecName: "kube-api-access-gczfh") pod "de3c0640-ef93-45f3-ad08-771d26117dfc" (UID: "de3c0640-ef93-45f3-ad08-771d26117dfc"). InnerVolumeSpecName "kube-api-access-gczfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.500480 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-config-data" (OuterVolumeSpecName: "config-data") pod "de3c0640-ef93-45f3-ad08-771d26117dfc" (UID: "de3c0640-ef93-45f3-ad08-771d26117dfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.514743 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-config-data" (OuterVolumeSpecName: "config-data") pod "d5dad84c-f09f-4430-90cc-febd017d6f72" (UID: "d5dad84c-f09f-4430-90cc-febd017d6f72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.516046 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de3c0640-ef93-45f3-ad08-771d26117dfc" (UID: "de3c0640-ef93-45f3-ad08-771d26117dfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.536977 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5dad84c-f09f-4430-90cc-febd017d6f72" (UID: "d5dad84c-f09f-4430-90cc-febd017d6f72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.537091 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "de3c0640-ef93-45f3-ad08-771d26117dfc" (UID: "de3c0640-ef93-45f3-ad08-771d26117dfc"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.565970 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.566012 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.566024 4903 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.566038 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwglm\" (UniqueName: \"kubernetes.io/projected/d5dad84c-f09f-4430-90cc-febd017d6f72-kube-api-access-fwglm\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.566050 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de3c0640-ef93-45f3-ad08-771d26117dfc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.566061 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczfh\" (UniqueName: \"kubernetes.io/projected/de3c0640-ef93-45f3-ad08-771d26117dfc-kube-api-access-gczfh\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.566072 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dad84c-f09f-4430-90cc-febd017d6f72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.717756 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.782144 4903 generic.go:334] "Generic (PLEG): container finished" podID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerID="8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6" exitCode=0 Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.782203 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.782220 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de3c0640-ef93-45f3-ad08-771d26117dfc","Type":"ContainerDied","Data":"8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6"} Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.782251 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de3c0640-ef93-45f3-ad08-771d26117dfc","Type":"ContainerDied","Data":"95731cc63c60ea938cfe31ba928577650769865dd2ab16fbc7d11702b2c8648e"} Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.782273 4903 scope.go:117] "RemoveContainer" containerID="8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.786421 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.786438 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5dad84c-f09f-4430-90cc-febd017d6f72","Type":"ContainerDied","Data":"c440295c7bf20bb14265267534d85f30df200a6e6a65fc8d1e10c49f59656021"} Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.797636 4903 generic.go:334] "Generic (PLEG): container finished" podID="56989c3c-0982-4534-9efa-7231440dad98" containerID="d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c" exitCode=0 Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.797692 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.797706 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56989c3c-0982-4534-9efa-7231440dad98","Type":"ContainerDied","Data":"d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c"} Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.797747 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"56989c3c-0982-4534-9efa-7231440dad98","Type":"ContainerDied","Data":"c9c212f4bf89c94a8bad869209a20040ae8687b487044867f878994e815f8d6e"} Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.845430 4903 scope.go:117] "RemoveContainer" containerID="a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.857860 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.868655 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.872541 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56989c3c-0982-4534-9efa-7231440dad98-logs\") pod \"56989c3c-0982-4534-9efa-7231440dad98\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.872586 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs\") pod \"56989c3c-0982-4534-9efa-7231440dad98\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.872611 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6p52\" (UniqueName: \"kubernetes.io/projected/56989c3c-0982-4534-9efa-7231440dad98-kube-api-access-b6p52\") pod \"56989c3c-0982-4534-9efa-7231440dad98\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.872659 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-combined-ca-bundle\") pod \"56989c3c-0982-4534-9efa-7231440dad98\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.872701 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-internal-tls-certs\") pod 
\"56989c3c-0982-4534-9efa-7231440dad98\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.872746 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-config-data\") pod \"56989c3c-0982-4534-9efa-7231440dad98\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.882742 4903 scope.go:117] "RemoveContainer" containerID="8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.882890 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.891422 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.892970 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6\": container with ID starting with 8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6 not found: ID does not exist" containerID="8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.893018 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6"} err="failed to get container status \"8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6\": rpc error: code = NotFound desc = could not find container \"8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6\": container with ID starting with 8d113e741534f86418820ecc4e9a11b005b396473edf94c199d3a62f78b776d6 not found: ID does not exist" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.893047 4903 scope.go:117] "RemoveContainer" containerID="a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.894191 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56989c3c-0982-4534-9efa-7231440dad98-logs" (OuterVolumeSpecName: "logs") pod "56989c3c-0982-4534-9efa-7231440dad98" (UID: "56989c3c-0982-4534-9efa-7231440dad98"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.894274 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75\": container with ID starting with a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75 not found: ID does not exist" containerID="a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.894299 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75"} err="failed to get container status \"a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75\": rpc error: code = NotFound desc = could not find container \"a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75\": container with ID starting with a8afb643445647d276e9a9d6fafe859d791a04231f0b6a79c37efcba52c82d75 not found: ID does not exist" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.894324 4903 scope.go:117] "RemoveContainer" containerID="3e7e129b212060daebe0c2797dabf076c362bec891ff6011fd85df0f45c3a3a6" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.899719 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56989c3c-0982-4534-9efa-7231440dad98-kube-api-access-b6p52" (OuterVolumeSpecName: "kube-api-access-b6p52") pod "56989c3c-0982-4534-9efa-7231440dad98" (UID: "56989c3c-0982-4534-9efa-7231440dad98"). InnerVolumeSpecName "kube-api-access-b6p52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.903340 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.903847 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a966d5dc-c13b-4925-bc59-64f40ee7f334" containerName="nova-manage" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.903864 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a966d5dc-c13b-4925-bc59-64f40ee7f334" containerName="nova-manage" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.903878 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-log" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.903886 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-log" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.903904 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerName="init" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.903913 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerName="init" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.903931 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dad84c-f09f-4430-90cc-febd017d6f72" containerName="nova-scheduler-scheduler" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.903940 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dad84c-f09f-4430-90cc-febd017d6f72" containerName="nova-scheduler-scheduler" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.903961 4903 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerName="dnsmasq-dns" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.903967 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerName="dnsmasq-dns" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.903994 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-log" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904003 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-log" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.904016 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-metadata" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904024 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-metadata" Jan 28 16:09:06 crc kubenswrapper[4903]: E0128 16:09:06.904034 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-api" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904043 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-api" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904264 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a966d5dc-c13b-4925-bc59-64f40ee7f334" containerName="nova-manage" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904281 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-api" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904294 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-log" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904308 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" containerName="nova-metadata-metadata" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904319 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12f4a8c-0bb0-464c-a8ca-d1d98db2bdfd" containerName="dnsmasq-dns" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904337 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="56989c3c-0982-4534-9efa-7231440dad98" containerName="nova-api-log" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.904349 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5dad84c-f09f-4430-90cc-febd017d6f72" containerName="nova-scheduler-scheduler" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.905096 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.907448 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.912009 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-config-data" (OuterVolumeSpecName: "config-data") pod "56989c3c-0982-4534-9efa-7231440dad98" (UID: "56989c3c-0982-4534-9efa-7231440dad98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.917638 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.950574 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56989c3c-0982-4534-9efa-7231440dad98" (UID: "56989c3c-0982-4534-9efa-7231440dad98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.965710 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.967781 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.970242 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.970327 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.973727 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "56989c3c-0982-4534-9efa-7231440dad98" (UID: "56989c3c-0982-4534-9efa-7231440dad98"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.974292 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs\") pod \"56989c3c-0982-4534-9efa-7231440dad98\" (UID: \"56989c3c-0982-4534-9efa-7231440dad98\") " Jan 28 16:09:06 crc kubenswrapper[4903]: W0128 16:09:06.977279 4903 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/56989c3c-0982-4534-9efa-7231440dad98/volumes/kubernetes.io~secret/public-tls-certs Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.977306 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "56989c3c-0982-4534-9efa-7231440dad98" (UID: "56989c3c-0982-4534-9efa-7231440dad98"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.979522 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-config-data\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.979762 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.979863 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-config-data\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.979941 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.980055 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4f5f43-7fbc-41d1-935d-b0844db162a7-logs\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.981854 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.982012 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7524\" (UniqueName: \"kubernetes.io/projected/7f4f5f43-7fbc-41d1-935d-b0844db162a7-kube-api-access-b7524\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.984299 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvd7r\" (UniqueName: \"kubernetes.io/projected/9ef215ce-85eb-4148-848a-aeb5a15e343e-kube-api-access-kvd7r\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.984562 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.984580 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/56989c3c-0982-4534-9efa-7231440dad98-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.984601 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.984613 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6p52\" (UniqueName: \"kubernetes.io/projected/56989c3c-0982-4534-9efa-7231440dad98-kube-api-access-b6p52\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:06 crc kubenswrapper[4903]: I0128 16:09:06.984625 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.029106 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "56989c3c-0982-4534-9efa-7231440dad98" (UID: "56989c3c-0982-4534-9efa-7231440dad98"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.035998 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.085747 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.085839 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7524\" (UniqueName: \"kubernetes.io/projected/7f4f5f43-7fbc-41d1-935d-b0844db162a7-kube-api-access-b7524\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.085867 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvd7r\" (UniqueName: \"kubernetes.io/projected/9ef215ce-85eb-4148-848a-aeb5a15e343e-kube-api-access-kvd7r\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.085907 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-config-data\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.085937 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.085967 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-config-data\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.085992 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.086028 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4f5f43-7fbc-41d1-935d-b0844db162a7-logs\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.086691 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56989c3c-0982-4534-9efa-7231440dad98-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.088257 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4f5f43-7fbc-41d1-935d-b0844db162a7-logs\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.091368 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.093284 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-config-data\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.093461 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-config-data\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.095288 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.099115 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.104461 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7524\" (UniqueName: 
\"kubernetes.io/projected/7f4f5f43-7fbc-41d1-935d-b0844db162a7-kube-api-access-b7524\") pod \"nova-metadata-0\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.107118 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvd7r\" (UniqueName: \"kubernetes.io/projected/9ef215ce-85eb-4148-848a-aeb5a15e343e-kube-api-access-kvd7r\") pod \"nova-scheduler-0\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " pod="openstack/nova-scheduler-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.115818 4903 scope.go:117] "RemoveContainer" containerID="d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.116391 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.144230 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.152939 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.161860 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.163677 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.170314 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.170600 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.170914 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.173158 4903 scope.go:117] "RemoveContainer" containerID="a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.177444 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.191402 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.191468 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.191574 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.191650 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-logs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.191727 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-config-data\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.191753 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6b6s\" (UniqueName: \"kubernetes.io/projected/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-kube-api-access-l6b6s\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.213402 4903 scope.go:117] "RemoveContainer" containerID="d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c" Jan 28 16:09:07 crc kubenswrapper[4903]: E0128 16:09:07.214307 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c\": container with ID starting with d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c not found: ID does not exist" containerID="d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.214413 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c"} err="failed to get container status \"d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c\": rpc error: code = NotFound desc = could not find container \"d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c\": container with ID starting with d177d7107c976c445111da41ae93206264706f247ef6231079d91b12f2edd18c not found: ID does not exist" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.214443 4903 scope.go:117] "RemoveContainer" containerID="a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f" Jan 28 16:09:07 crc kubenswrapper[4903]: E0128 16:09:07.214891 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f\": container with ID starting with a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f not found: ID does not exist" containerID="a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.214912 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f"} err="failed to get container status \"a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f\": rpc error: code = NotFound desc = could not find container \"a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f\": container with ID starting with a3a2686790723eb933e7b418b07cdf68922236b8485e94e854eca2ff1ef55b7f not found: ID does not exist" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 
16:09:07.293894 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.293960 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-logs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.294017 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-config-data\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.294036 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6b6s\" (UniqueName: \"kubernetes.io/projected/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-kube-api-access-l6b6s\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.294088 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.294119 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.295093 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-logs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.299267 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-config-data\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.300394 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.304189 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.306025 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.311201 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6b6s\" (UniqueName: \"kubernetes.io/projected/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-kube-api-access-l6b6s\") pod \"nova-api-0\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.402591 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.487780 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.622508 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:07 crc kubenswrapper[4903]: W0128 16:09:07.627757 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f4f5f43_7fbc_41d1_935d_b0844db162a7.slice/crio-b5893882ab8ee781886f5597553395a5570d33496ac7f8e5b32fa3f9a98f7db9 WatchSource:0}: Error finding container b5893882ab8ee781886f5597553395a5570d33496ac7f8e5b32fa3f9a98f7db9: Status 404 returned error can't find the container with id b5893882ab8ee781886f5597553395a5570d33496ac7f8e5b32fa3f9a98f7db9 Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.816523 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f4f5f43-7fbc-41d1-935d-b0844db162a7","Type":"ContainerStarted","Data":"80e37ef3a7839cc1c8d8d21208fac7637eb50268a7239fb7994a6925aeaeb7ef"} Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.816930 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f4f5f43-7fbc-41d1-935d-b0844db162a7","Type":"ContainerStarted","Data":"b5893882ab8ee781886f5597553395a5570d33496ac7f8e5b32fa3f9a98f7db9"} Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.852872 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:07 crc kubenswrapper[4903]: W0128 16:09:07.854476 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ef215ce_85eb_4148_848a_aeb5a15e343e.slice/crio-bb56c9b6e1a6481e0abe288379fb3b1829e392b49dfa9d2d84959732310c1660 WatchSource:0}: Error finding container bb56c9b6e1a6481e0abe288379fb3b1829e392b49dfa9d2d84959732310c1660: Status 404 returned error can't find the container with id bb56c9b6e1a6481e0abe288379fb3b1829e392b49dfa9d2d84959732310c1660 Jan 28 16:09:07 crc kubenswrapper[4903]: W0128 16:09:07.960879 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59f1f4e5_22a4_420b_b6f2_8f936c5c39c9.slice/crio-58bf1c2569224c6c45eb3bce804aee0facc1bc81dc3da87edb6c105a6885bda9 WatchSource:0}: Error finding container 58bf1c2569224c6c45eb3bce804aee0facc1bc81dc3da87edb6c105a6885bda9: Status 404 returned error can't find the container with id 58bf1c2569224c6c45eb3bce804aee0facc1bc81dc3da87edb6c105a6885bda9 Jan 28 16:09:07 crc kubenswrapper[4903]: I0128 16:09:07.964821 4903 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.424002 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56989c3c-0982-4534-9efa-7231440dad98" path="/var/lib/kubelet/pods/56989c3c-0982-4534-9efa-7231440dad98/volumes" Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.425322 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5dad84c-f09f-4430-90cc-febd017d6f72" path="/var/lib/kubelet/pods/d5dad84c-f09f-4430-90cc-febd017d6f72/volumes" Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.426080 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3c0640-ef93-45f3-ad08-771d26117dfc" path="/var/lib/kubelet/pods/de3c0640-ef93-45f3-ad08-771d26117dfc/volumes" Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.826594 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f4f5f43-7fbc-41d1-935d-b0844db162a7","Type":"ContainerStarted","Data":"5c7ed7cd33e049e46f8040cb018864248e1ee41e536bd85ada33bb819a70ed86"} Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.828831 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ef215ce-85eb-4148-848a-aeb5a15e343e","Type":"ContainerStarted","Data":"45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374"} Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.828899 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ef215ce-85eb-4148-848a-aeb5a15e343e","Type":"ContainerStarted","Data":"bb56c9b6e1a6481e0abe288379fb3b1829e392b49dfa9d2d84959732310c1660"} Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.854968 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9","Type":"ContainerStarted","Data":"84cee160ceac6a4ece1e643340f1aeca0d04bc37f045a38f6f21bb0a47361679"} Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.855031 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9","Type":"ContainerStarted","Data":"415bb4f9abcba2194b819d557a32350c24234e674b16185bd86d9dd42b6d9a0b"} Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.855045 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9","Type":"ContainerStarted","Data":"58bf1c2569224c6c45eb3bce804aee0facc1bc81dc3da87edb6c105a6885bda9"} Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.870474 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.870436537 podStartE2EDuration="2.870436537s" podCreationTimestamp="2026-01-28 16:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:09:08.849853045 +0000 UTC m=+1421.125824576" watchObservedRunningTime="2026-01-28 16:09:08.870436537 +0000 UTC m=+1421.146408048" Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.889611 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.889591989 podStartE2EDuration="2.889591989s" podCreationTimestamp="2026-01-28 16:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 16:09:08.863058055 +0000 UTC m=+1421.139029566" watchObservedRunningTime="2026-01-28 16:09:08.889591989 +0000 UTC m=+1421.165563500" Jan 28 16:09:08 crc kubenswrapper[4903]: I0128 16:09:08.894158 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.894144464 podStartE2EDuration="1.894144464s" podCreationTimestamp="2026-01-28 16:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:09:08.882983109 +0000 UTC m=+1421.158954630" watchObservedRunningTime="2026-01-28 16:09:08.894144464 +0000 UTC m=+1421.170115975" Jan 28 16:09:09 crc kubenswrapper[4903]: I0128 16:09:09.148119 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-szjp7" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="registry-server" probeResult="failure" output=< Jan 28 16:09:09 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 16:09:09 crc kubenswrapper[4903]: > Jan 28 16:09:12 crc kubenswrapper[4903]: I0128 16:09:12.117629 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 16:09:12 crc kubenswrapper[4903]: I0128 16:09:12.118018 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 16:09:12 crc kubenswrapper[4903]: I0128 16:09:12.403623 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 16:09:17 crc kubenswrapper[4903]: I0128 16:09:17.117618 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 16:09:17 crc kubenswrapper[4903]: I0128 16:09:17.118142 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 16:09:17 crc kubenswrapper[4903]: I0128 16:09:17.403843 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 16:09:17 crc kubenswrapper[4903]: I0128 16:09:17.453308 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 16:09:17 crc kubenswrapper[4903]: I0128 16:09:17.488797 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:09:17 crc kubenswrapper[4903]: I0128 16:09:17.488884 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 16:09:17 crc kubenswrapper[4903]: I0128 16:09:17.982965 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 16:09:18 crc kubenswrapper[4903]: I0128 16:09:18.124478 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 16:09:18 crc kubenswrapper[4903]: I0128 16:09:18.130975 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Jan 28 16:09:18 crc kubenswrapper[4903]: I0128 16:09:18.133325 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:09:18 crc kubenswrapper[4903]: I0128 16:09:18.188750 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:09:18 crc kubenswrapper[4903]: I0128 16:09:18.378898 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-szjp7"] Jan 28 16:09:18 crc kubenswrapper[4903]: I0128 16:09:18.506075 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 16:09:18 crc kubenswrapper[4903]: I0128 16:09:18.506122 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 16:09:19 crc kubenswrapper[4903]: I0128 16:09:19.961423 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-szjp7" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="registry-server" containerID="cri-o://febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9" gracePeriod=2 Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.481431 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.671270 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-utilities\") pod \"35727cb3-f700-42e6-b472-6b84872e40af\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.671397 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-catalog-content\") pod \"35727cb3-f700-42e6-b472-6b84872e40af\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.671562 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s478m\" (UniqueName: \"kubernetes.io/projected/35727cb3-f700-42e6-b472-6b84872e40af-kube-api-access-s478m\") pod \"35727cb3-f700-42e6-b472-6b84872e40af\" (UID: \"35727cb3-f700-42e6-b472-6b84872e40af\") " Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.672338 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-utilities" (OuterVolumeSpecName: "utilities") pod "35727cb3-f700-42e6-b472-6b84872e40af" (UID: "35727cb3-f700-42e6-b472-6b84872e40af"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.684091 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35727cb3-f700-42e6-b472-6b84872e40af-kube-api-access-s478m" (OuterVolumeSpecName: "kube-api-access-s478m") pod "35727cb3-f700-42e6-b472-6b84872e40af" (UID: "35727cb3-f700-42e6-b472-6b84872e40af"). InnerVolumeSpecName "kube-api-access-s478m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.774291 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s478m\" (UniqueName: \"kubernetes.io/projected/35727cb3-f700-42e6-b472-6b84872e40af-kube-api-access-s478m\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.774343 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.838921 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35727cb3-f700-42e6-b472-6b84872e40af" (UID: "35727cb3-f700-42e6-b472-6b84872e40af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.875942 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35727cb3-f700-42e6-b472-6b84872e40af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.977774 4903 generic.go:334] "Generic (PLEG): container finished" podID="35727cb3-f700-42e6-b472-6b84872e40af" containerID="febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9" exitCode=0 Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.977849 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szjp7" event={"ID":"35727cb3-f700-42e6-b472-6b84872e40af","Type":"ContainerDied","Data":"febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9"} Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.977897 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-szjp7" event={"ID":"35727cb3-f700-42e6-b472-6b84872e40af","Type":"ContainerDied","Data":"c6b774d1a5d59882caaca8d0fbfe7afbe7ddb86102b12f77d4a54b4fa87591a4"} Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.977935 4903 scope.go:117] "RemoveContainer" containerID="febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9" Jan 28 16:09:20 crc kubenswrapper[4903]: I0128 16:09:20.978126 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-szjp7" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.009112 4903 scope.go:117] "RemoveContainer" containerID="2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.031831 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-szjp7"] Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.040445 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-szjp7"] Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.045373 4903 scope.go:117] "RemoveContainer" containerID="5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.091600 4903 scope.go:117] "RemoveContainer" containerID="febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9" Jan 28 16:09:21 crc kubenswrapper[4903]: E0128 16:09:21.092117 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9\": container with ID starting with febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9 not found: ID does not exist" containerID="febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.092150 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9"} err="failed to get container status \"febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9\": rpc error: code = NotFound desc = could not find container \"febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9\": container with ID starting with febbff7a36b0ccfddb786dd964091f3f9a6a95bf000efa3b5d5ec1779ad846d9 not found: ID does not exist" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.092172 4903 scope.go:117] "RemoveContainer" containerID="2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5" Jan 28 16:09:21 crc kubenswrapper[4903]: E0128 16:09:21.092640 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5\": container with ID starting with 2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5 not found: ID does not exist" containerID="2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.092684 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5"} err="failed to get container status \"2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5\": rpc error: code = NotFound desc = could not find container \"2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5\": container with ID starting with 2749daaf3cbdca9339046bbe272e12717cd4fd34332fa91e2a1f7672ed4d30d5 not found: ID does not exist" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.092716 4903 scope.go:117] "RemoveContainer" containerID="5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05" Jan 28 16:09:21 crc kubenswrapper[4903]: E0128 16:09:21.093047 4903 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05\": container with ID starting with 5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05 not found: ID does not exist" containerID="5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05" Jan 28 16:09:21 crc kubenswrapper[4903]: I0128 16:09:21.093093 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05"} err="failed to get container status \"5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05\": rpc error: code = NotFound desc = could not find container \"5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05\": container with ID starting with 5bc0e4dbfb6ced29f088de0decb248765977a580fd8317917f714c58a88e0c05 not found: ID does not exist" Jan 28 16:09:22 crc kubenswrapper[4903]: I0128 16:09:22.424647 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35727cb3-f700-42e6-b472-6b84872e40af" path="/var/lib/kubelet/pods/35727cb3-f700-42e6-b472-6b84872e40af/volumes" Jan 28 16:09:23 crc kubenswrapper[4903]: I0128 16:09:23.024637 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.123703 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.124055 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.130096 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.130149 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.496431 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.497055 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.499654 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 16:09:27 crc kubenswrapper[4903]: I0128 16:09:27.506642 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 16:09:28 crc kubenswrapper[4903]: I0128 16:09:28.045572 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 16:09:28 crc kubenswrapper[4903]: I0128 16:09:28.055639 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.629177 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.630020 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="e1ce53ab-7d85-47b9-a886-162ef3726997" containerName="openstackclient" containerID="cri-o://79b4ee686b25bbef16eefb66785f1f74ebe67f05a47f44b4dfa49ba85ce6d221" gracePeriod=2 Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 
16:09:45.645593 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.648113 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.648333 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="cinder-scheduler" containerID="cri-o://ec452ecafe6bbdf14b8e60c7db18384312eea995612c19c665214db7b6ff8163" gracePeriod=30 Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.648456 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="probe" containerID="cri-o://84e46dfe4c416722411c13edc8cb824e9b50a554e89df0cadc2ab7b6cbd19188" gracePeriod=30 Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.868870 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.938589 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-22c6-account-create-update-6xb8c"] Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.952135 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.952413 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api-log" containerID="cri-o://e3cac4a8f1fa34db395b4644330439522c368c8649ab045e0d9d216976c0e7ee" gracePeriod=30 Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.952568 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api" containerID="cri-o://2cc0c1e09b1d32a98d2dde5eee40318869853a44f68e5250ff8ceb601a48d512" gracePeriod=30 Jan 28 16:09:45 crc kubenswrapper[4903]: E0128 16:09:45.957108 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:45 crc kubenswrapper[4903]: E0128 16:09:45.957168 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data podName:cee6442c-f9ef-4902-b6ec-2bc01a904849 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:46.457149693 +0000 UTC m=+1458.733121204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data") pod "rabbitmq-cell1-server-0" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849") : configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:45 crc kubenswrapper[4903]: I0128 16:09:45.982893 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-22c6-account-create-update-6xb8c"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.046597 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-njdbg"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.106788 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-njdbg"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.151827 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-74cb-account-create-update-s7vzm"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.202961 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-74cb-account-create-update-s7vzm"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.267462 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wwf2t"] Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.268005 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="extract-content" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.268028 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="extract-content" Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.268045 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="registry-server" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.268054 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="registry-server" Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.268085 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="extract-utilities" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.268093 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="extract-utilities" Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.268112 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ce53ab-7d85-47b9-a886-162ef3726997" containerName="openstackclient" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.268119 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ce53ab-7d85-47b9-a886-162ef3726997" containerName="openstackclient" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.268356 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="35727cb3-f700-42e6-b472-6b84872e40af" containerName="registry-server" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.268378 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ce53ab-7d85-47b9-a886-162ef3726997" containerName="openstackclient" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.269153 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.271195 4903 generic.go:334] "Generic (PLEG): container finished" podID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerID="e3cac4a8f1fa34db395b4644330439522c368c8649ab045e0d9d216976c0e7ee" exitCode=143 Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.271248 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"033b894a-46ce-4bd8-b97c-312c8b7c90dd","Type":"ContainerDied","Data":"e3cac4a8f1fa34db395b4644330439522c368c8649ab045e0d9d216976c0e7ee"} Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.271871 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.302789 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-rj86j"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.304129 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.323691 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.346628 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wwf2t"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.377500 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfr77\" (UniqueName: \"kubernetes.io/projected/0ee28286-9cd6-4014-b388-a41d22c5e413-kube-api-access-pfr77\") pod \"root-account-create-update-wwf2t\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.377611 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-operator-scripts\") pod \"nova-cell1-4ff7-account-create-update-rj86j\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.377798 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts\") pod \"root-account-create-update-wwf2t\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.377865 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24t7j\" (UniqueName: \"kubernetes.io/projected/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-kube-api-access-24t7j\") pod \"nova-cell1-4ff7-account-create-update-rj86j\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.406738 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-rj86j"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.455674 4903 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="05998e14-d4f9-47d2-b1c7-d563505fa102" path="/var/lib/kubelet/pods/05998e14-d4f9-47d2-b1c7-d563505fa102/volumes" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.459060 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15daf8e2-37c9-4468-85f9-8f47719805c3" path="/var/lib/kubelet/pods/15daf8e2-37c9-4468-85f9-8f47719805c3/volumes" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.463479 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69121677-f86b-414e-bcba-b7e808aff916" path="/var/lib/kubelet/pods/69121677-f86b-414e-bcba-b7e808aff916/volumes" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.464109 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.480642 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24t7j\" (UniqueName: \"kubernetes.io/projected/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-kube-api-access-24t7j\") pod \"nova-cell1-4ff7-account-create-update-rj86j\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.480696 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfr77\" (UniqueName: \"kubernetes.io/projected/0ee28286-9cd6-4014-b388-a41d22c5e413-kube-api-access-pfr77\") pod \"root-account-create-update-wwf2t\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.480764 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-operator-scripts\") pod \"nova-cell1-4ff7-account-create-update-rj86j\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.480924 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts\") pod \"root-account-create-update-wwf2t\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.481597 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts\") pod \"root-account-create-update-wwf2t\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.482104 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-operator-scripts\") pod \"nova-cell1-4ff7-account-create-update-rj86j\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.482892 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.482941 4903 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data podName:cee6442c-f9ef-4902-b6ec-2bc01a904849 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:47.482925485 +0000 UTC m=+1459.758896996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data") pod "rabbitmq-cell1-server-0" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849") : configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.520221 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24t7j\" (UniqueName: \"kubernetes.io/projected/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-kube-api-access-24t7j\") pod \"nova-cell1-4ff7-account-create-update-rj86j\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.542641 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfr77\" (UniqueName: \"kubernetes.io/projected/0ee28286-9cd6-4014-b388-a41d22c5e413-kube-api-access-pfr77\") pod \"root-account-create-update-wwf2t\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.542705 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8brlz"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.556252 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c8dd-account-create-update-zmxgn"] Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.587124 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 16:09:46 crc kubenswrapper[4903]: E0128 16:09:46.587179 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data podName:bb51034c-4387-4aba-8eff-6ff960538da9 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:47.087162578 +0000 UTC m=+1459.363134079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data") pod "rabbitmq-server-0" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9") : configmap "rabbitmq-config-data" not found Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.588322 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8brlz"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.596743 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.626576 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-f6twx"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.651649 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c8dd-account-create-update-zmxgn"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.652079 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.670767 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-f6twx"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.709675 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4zx8t"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.741475 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4zx8t"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.754100 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.754769 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="openstack-network-exporter" containerID="cri-o://ea094faee48284617c25b7bce901cd1d485c8c3eb065114f39cedd97df20a515" gracePeriod=300 Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.770694 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.770929 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="ovn-northd" containerID="cri-o://a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" gracePeriod=30 Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.771040 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="openstack-network-exporter" containerID="cri-o://858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8" gracePeriod=30 Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.786313 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-08ac-account-create-update-vmfnk"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.806626 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-19a1-account-create-update-zgvgv"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.840128 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-08ac-account-create-update-vmfnk"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.878308 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-19a1-account-create-update-zgvgv"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.897776 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="ovsdbserver-sb" containerID="cri-o://b068b0541444e9457126fbba0acffd002fec18d4cbec22a881a9621834e71d6d" gracePeriod=300 Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.905209 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.905993 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="openstack-network-exporter" containerID="cri-o://d8a74584b686d6ab5913a3d1a5bdaf5d4115fabca3b023a2faf39781ba497fbe" gracePeriod=300 Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 
16:09:46.928217 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-s958x"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.950512 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-s958x"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.964361 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g8tcr"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.985161 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-sqdt2"] Jan 28 16:09:46 crc kubenswrapper[4903]: I0128 16:09:46.985735 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-sqdt2" podUID="c8080a17-9166-4721-868f-c43799472922" containerName="openstack-network-exporter" containerID="cri-o://38013f51046f369b6687e2c5d59c171aa0431838ce787e24819e162b03bcc631" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.010304 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-gj6nt"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.024180 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-gj6nt"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.036007 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-sdvpf"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.049060 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-z8mw6"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.058293 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-z8mw6"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.068437 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rlbrx"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.094964 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rlbrx"] Jan 28 16:09:47 crc kubenswrapper[4903]: E0128 16:09:47.105137 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 16:09:47 crc kubenswrapper[4903]: E0128 16:09:47.105232 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data podName:bb51034c-4387-4aba-8eff-6ff960538da9 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:48.10518989 +0000 UTC m=+1460.381161401 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data") pod "rabbitmq-server-0" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9") : configmap "rabbitmq-config-data" not found Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.225986 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="ovsdbserver-nb" containerID="cri-o://115db9d03452ef27c97e4292c7d8d47526c8e5ede6cf99f55017f73a5b5958ea" gracePeriod=300 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.275745 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tbhp2"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.287376 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tbhp2"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.297547 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-868d5455d4-797gw"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.297820 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-868d5455d4-797gw" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-log" containerID="cri-o://5dd7a851cd619c29827b0ea6cd215ddd77b2818c97ba5045d1ae347a56fe5ca2" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.298306 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-868d5455d4-797gw" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-api" containerID="cri-o://02a42f37dbf91bc71d23efe4fb6af018b9e853e3b220c2f03760e372b14d5184" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.326730 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-df7b7b7fc-j8ps6"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.327007 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-df7b7b7fc-j8ps6" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-api" containerID="cri-o://57f5aead75f7ccb66670a88b340768f4042e67c223d457f4586543c309862540" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.327474 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-df7b7b7fc-j8ps6" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-httpd" containerID="cri-o://7144e9f3e379f3b1c48972a79f95a4ca58fc84bde1c3b98a44aa1c439247a433" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.345771 4903 generic.go:334] "Generic (PLEG): container finished" podID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerID="858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8" exitCode=2 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.345892 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe","Type":"ContainerDied","Data":"858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8"} Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.395575 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sqdt2_c8080a17-9166-4721-868f-c43799472922/openstack-network-exporter/0.log" Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.395625 4903 generic.go:334] "Generic 
(PLEG): container finished" podID="c8080a17-9166-4721-868f-c43799472922" containerID="38013f51046f369b6687e2c5d59c171aa0431838ce787e24819e162b03bcc631" exitCode=2 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.395697 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sqdt2" event={"ID":"c8080a17-9166-4721-868f-c43799472922","Type":"ContainerDied","Data":"38013f51046f369b6687e2c5d59c171aa0431838ce787e24819e162b03bcc631"} Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.455327 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-zk982"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.457682 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-zk982" podUID="dad42813-08ad-4746-b488-af16a6504561" containerName="dnsmasq-dns" containerID="cri-o://ac5fa928a6299fa4da555a268ab5014fe09528230a48dee3048b346cb50eab23" gracePeriod=10 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.475174 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-tw4vv"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.476290 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_83fe52fb-0760-4173-9567-11d84b522c71/ovsdbserver-sb/0.log" Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.476336 4903 generic.go:334] "Generic (PLEG): container finished" podID="83fe52fb-0760-4173-9567-11d84b522c71" containerID="ea094faee48284617c25b7bce901cd1d485c8c3eb065114f39cedd97df20a515" exitCode=2 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.476355 4903 generic.go:334] "Generic (PLEG): container finished" podID="83fe52fb-0760-4173-9567-11d84b522c71" containerID="b068b0541444e9457126fbba0acffd002fec18d4cbec22a881a9621834e71d6d" exitCode=143 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.476397 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"83fe52fb-0760-4173-9567-11d84b522c71","Type":"ContainerDied","Data":"ea094faee48284617c25b7bce901cd1d485c8c3eb065114f39cedd97df20a515"} Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.476422 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"83fe52fb-0760-4173-9567-11d84b522c71","Type":"ContainerDied","Data":"b068b0541444e9457126fbba0acffd002fec18d4cbec22a881a9621834e71d6d"} Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.489480 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-tw4vv"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.507727 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.508367 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-server" containerID="cri-o://8d6925cdba582789ace3400817f99ef5a11fa5573bf42b9183b2310d83669949" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.508918 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="swift-recon-cron" containerID="cri-o://427c2da60bfa90da8ebbfb150ccfb94366c48918a404ebdd1894102608ea88f1" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 
16:09:47.508981 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="rsync" containerID="cri-o://fdfe4956af02ae007c08b5307ab6872b8e0595452ba36784decb8edd4b8a5d9b" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509028 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-expirer" containerID="cri-o://5f7182de515dde6ed72737089f102bb7c64b5bceae2ea9dd0e07b98590e0126b" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509078 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-updater" containerID="cri-o://fddb56423e806702e1b6dee36e7347c017a45be9d08b635bb4e199df0eb3489e" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509121 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-auditor" containerID="cri-o://bbcf62a11c97c0772b915ab52c7b8ed5336a2b9f1735f7d74650ddbac7968b3f" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509162 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-replicator" containerID="cri-o://eebba63abd410036bd2f597b488df5fd3fc712afc83ddb919fb3f33d78e82010" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509203 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-server" containerID="cri-o://49fa880f8fb88d223229db177857faa713b2086ac01e656664ea7ecec2ee6237" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509290 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-updater" containerID="cri-o://987273170f201bd99282bf5c33154171012fac1d73596bce885546d8d13a8681" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509343 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-auditor" containerID="cri-o://eb7902754910c952a0e047350a7096399669542b9269940b5d03b5d9577fabae" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509391 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-replicator" containerID="cri-o://9ec33b0218cbf5be31eaa4605b066cecb134d4131c4136762bbbf8bceaed18e9" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509439 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-server" containerID="cri-o://2077d11c701d11f3d5b9f94bf673c99cd175858ca2ee3f9f5496123712d24aa8" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509482 4903 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-reaper" containerID="cri-o://c78ef9751a8dce58d95c9353ff8051a2fbe27f2886b49daeb6742161a84e3b25" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509519 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-auditor" containerID="cri-o://1902647852c72d50cd7f7eba6e1b998be88fa3e8bce1292d120aa7ad36fcce6a" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.509599 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-replicator" containerID="cri-o://05bd562da8eff098ad5295672772555c223f358c232a73d480a9a4208fbc2f2e" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: E0128 16:09:47.520066 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:47 crc kubenswrapper[4903]: E0128 16:09:47.520143 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data podName:cee6442c-f9ef-4902-b6ec-2bc01a904849 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:49.520122649 +0000 UTC m=+1461.796094170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data") pod "rabbitmq-cell1-server-0" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849") : configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.525233 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0e9123e0-08c8-4892-8378-4f99799d7dfc/ovsdbserver-nb/0.log" Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.525287 4903 generic.go:334] "Generic (PLEG): container finished" podID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerID="d8a74584b686d6ab5913a3d1a5bdaf5d4115fabca3b023a2faf39781ba497fbe" exitCode=2 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.525320 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e9123e0-08c8-4892-8378-4f99799d7dfc","Type":"ContainerDied","Data":"d8a74584b686d6ab5913a3d1a5bdaf5d4115fabca3b023a2faf39781ba497fbe"} Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.528062 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gxgmt"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.543392 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-gxgmt"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.580700 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-ws2qb"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.595092 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-ws2qb"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.654595 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.722453 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rmt7b"] Jan 28 16:09:47 crc 
kubenswrapper[4903]: I0128 16:09:47.727036 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerName="rabbitmq" containerID="cri-o://cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee" gracePeriod=604800 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.772633 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rmt7b"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.862610 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8479-account-create-update-7qbbj"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.881521 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8479-account-create-update-7qbbj"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.906278 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ddd577785-zk982" podUID="dad42813-08ad-4746-b488-af16a6504561" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.195:5353: connect: connection refused" Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.925835 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.926114 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9ef215ce-85eb-4148-848a-aeb5a15e343e" containerName="nova-scheduler-scheduler" containerID="cri-o://45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374" gracePeriod=30 Jan 28 16:09:47 crc kubenswrapper[4903]: I0128 16:09:47.992632 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.001857 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sqdt2_c8080a17-9166-4721-868f-c43799472922/openstack-network-exporter/0.log" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.001932 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.047799 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.048295 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-log" containerID="cri-o://415bb4f9abcba2194b819d557a32350c24234e674b16185bd86d9dd42b6d9a0b" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.048432 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-api" containerID="cri-o://84cee160ceac6a4ece1e643340f1aeca0d04bc37f045a38f6f21bb0a47361679" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.072036 4903 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 16:09:48 crc kubenswrapper[4903]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: if [ -n "" ]; then Jan 28 16:09:48 crc kubenswrapper[4903]: GRANT_DATABASE="" Jan 28 16:09:48 crc kubenswrapper[4903]: else Jan 28 16:09:48 crc kubenswrapper[4903]: GRANT_DATABASE="*" Jan 28 16:09:48 crc kubenswrapper[4903]: fi Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: # going for maximum compatibility here: Jan 28 16:09:48 crc kubenswrapper[4903]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 16:09:48 crc kubenswrapper[4903]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 16:09:48 crc kubenswrapper[4903]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 16:09:48 crc kubenswrapper[4903]: # support updates Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: $MYSQL_CMD < logger="UnhandledError" Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.075643 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-wwf2t" podUID="0ee28286-9cd6-4014-b388-a41d22c5e413" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.081364 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_83fe52fb-0760-4173-9567-11d84b522c71/ovsdbserver-sb/0.log" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.081437 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.115399 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.115774 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-log" containerID="cri-o://80e37ef3a7839cc1c8d8d21208fac7637eb50268a7239fb7994a6925aeaeb7ef" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.116008 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-metadata" containerID="cri-o://5c7ed7cd33e049e46f8040cb018864248e1ee41e536bd85ada33bb819a70ed86" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.134438 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovs-rundir\") pod \"c8080a17-9166-4721-868f-c43799472922\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.134547 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8080a17-9166-4721-868f-c43799472922-config\") pod \"c8080a17-9166-4721-868f-c43799472922\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.134605 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-metrics-certs-tls-certs\") pod \"c8080a17-9166-4721-868f-c43799472922\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.134626 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovn-rundir\") pod \"c8080a17-9166-4721-868f-c43799472922\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.134646 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkrdb\" (UniqueName: \"kubernetes.io/projected/c8080a17-9166-4721-868f-c43799472922-kube-api-access-fkrdb\") pod \"c8080a17-9166-4721-868f-c43799472922\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.134735 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-combined-ca-bundle\") pod \"c8080a17-9166-4721-868f-c43799472922\" (UID: \"c8080a17-9166-4721-868f-c43799472922\") " Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.135238 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.135294 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data podName:bb51034c-4387-4aba-8eff-6ff960538da9 nodeName:}" failed. 
No retries permitted until 2026-01-28 16:09:50.13527987 +0000 UTC m=+1462.411251381 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data") pod "rabbitmq-server-0" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9") : configmap "rabbitmq-config-data" not found Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.135338 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "c8080a17-9166-4721-868f-c43799472922" (UID: "c8080a17-9166-4721-868f-c43799472922"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.138289 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "c8080a17-9166-4721-868f-c43799472922" (UID: "c8080a17-9166-4721-868f-c43799472922"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.142273 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8080a17-9166-4721-868f-c43799472922-config" (OuterVolumeSpecName: "config") pod "c8080a17-9166-4721-868f-c43799472922" (UID: "c8080a17-9166-4721-868f-c43799472922"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.184411 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8080a17-9166-4721-868f-c43799472922-kube-api-access-fkrdb" (OuterVolumeSpecName: "kube-api-access-fkrdb") pod "c8080a17-9166-4721-868f-c43799472922" (UID: "c8080a17-9166-4721-868f-c43799472922"). InnerVolumeSpecName "kube-api-access-fkrdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.190780 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.191285 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-log" containerID="cri-o://c34ec1bdca9dcf388b45d4df31616bfc2ee16b7a70a6f94f04662492238c5d30" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.194754 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-httpd" containerID="cri-o://b0fb34b235f11adc68d9beed30603f223ccc79ee9902295559769c17c5aa973b" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.212904 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-fwdxv"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.224755 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-fwdxv"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.231591 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.231868 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-log" containerID="cri-o://09c605d6038ace2063cd36abb755adc5f02bf5408e796a180094c2237ab62208" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.232296 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-httpd" containerID="cri-o://5294340766b49118b122c18adf127768d2b7a2248eea8752adcf1bf834f406c1" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246087 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-combined-ca-bundle\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246253 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-config\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246301 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-metrics-certs-tls-certs\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246400 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-scripts\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") 
" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246440 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-ovsdbserver-sb-tls-certs\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246459 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p7hl\" (UniqueName: \"kubernetes.io/projected/83fe52fb-0760-4173-9567-11d84b522c71-kube-api-access-9p7hl\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246480 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83fe52fb-0760-4173-9567-11d84b522c71-ovsdb-rundir\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246499 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"83fe52fb-0760-4173-9567-11d84b522c71\" (UID: \"83fe52fb-0760-4173-9567-11d84b522c71\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246863 4903 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246880 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8080a17-9166-4721-868f-c43799472922-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246888 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8080a17-9166-4721-868f-c43799472922-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.246899 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkrdb\" (UniqueName: \"kubernetes.io/projected/c8080a17-9166-4721-868f-c43799472922-kube-api-access-fkrdb\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.247054 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q5hf4"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.255330 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wwf2t"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.264414 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-config" (OuterVolumeSpecName: "config") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.265003 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-scripts" (OuterVolumeSpecName: "scripts") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.265539 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83fe52fb-0760-4173-9567-11d84b522c71-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.275720 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q5hf4"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.282827 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cmxmp"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.285290 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fe52fb-0760-4173-9567-11d84b522c71-kube-api-access-9p7hl" (OuterVolumeSpecName: "kube-api-access-9p7hl") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "kube-api-access-9p7hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.292945 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cmxmp"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.299813 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.302602 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jpjph"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.307309 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-rj86j"] Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.312357 4903 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 16:09:48 crc kubenswrapper[4903]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: if [ -n "nova_cell1" ]; then Jan 28 16:09:48 crc kubenswrapper[4903]: GRANT_DATABASE="nova_cell1" Jan 28 16:09:48 crc kubenswrapper[4903]: else Jan 28 16:09:48 crc kubenswrapper[4903]: GRANT_DATABASE="*" Jan 28 16:09:48 crc kubenswrapper[4903]: fi Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: # going for maximum compatibility here: Jan 28 16:09:48 crc kubenswrapper[4903]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 16:09:48 crc kubenswrapper[4903]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 16:09:48 crc kubenswrapper[4903]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 28 16:09:48 crc kubenswrapper[4903]: # support updates Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: $MYSQL_CMD < logger="UnhandledError" Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.313581 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" podUID="baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.318587 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jpjph"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.353935 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-sp6mn"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.354044 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.354066 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p7hl\" (UniqueName: \"kubernetes.io/projected/83fe52fb-0760-4173-9567-11d84b522c71-kube-api-access-9p7hl\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.354076 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83fe52fb-0760-4173-9567-11d84b522c71-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.354095 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.354104 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fe52fb-0760-4173-9567-11d84b522c71-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.412876 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8080a17-9166-4721-868f-c43799472922" (UID: "c8080a17-9166-4721-868f-c43799472922"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.472641 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.472868 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee18582-19e5-4d9a-8fcf-bf69d8efa384" path="/var/lib/kubelet/pods/2ee18582-19e5-4d9a-8fcf-bf69d8efa384/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.473564 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30606f8f-095e-47cc-8784-9ea99eaf293a" path="/var/lib/kubelet/pods/30606f8f-095e-47cc-8784-9ea99eaf293a/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.474165 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f168baf-cfa3-4403-825f-ed1a8e92beca" path="/var/lib/kubelet/pods/3f168baf-cfa3-4403-825f-ed1a8e92beca/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.474778 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4756c433-f387-49e6-ada4-56bec03547c5" path="/var/lib/kubelet/pods/4756c433-f387-49e6-ada4-56bec03547c5/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.476004 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c325698-a4a2-4f1b-a865-e37be6610791" path="/var/lib/kubelet/pods/6c325698-a4a2-4f1b-a865-e37be6610791/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.476666 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83949796-38e0-4cd4-8358-d2198dd7dfb8" path="/var/lib/kubelet/pods/83949796-38e0-4cd4-8358-d2198dd7dfb8/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.476978 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.477417 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff3c2fe-30ce-45ce-938e-9b94c7549522" path="/var/lib/kubelet/pods/8ff3c2fe-30ce-45ce-938e-9b94c7549522/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.478866 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a966d5dc-c13b-4925-bc59-64f40ee7f334" path="/var/lib/kubelet/pods/a966d5dc-c13b-4925-bc59-64f40ee7f334/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.479464 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9ffd7e-7027-4e36-ad58-163afe824cc5" path="/var/lib/kubelet/pods/ac9ffd7e-7027-4e36-ad58-163afe824cc5/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.480081 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0f5c51-bd2a-4640-b0e3-a826d45a28d6" path="/var/lib/kubelet/pods/ad0f5c51-bd2a-4640-b0e3-a826d45a28d6/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.481293 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8934da-e18b-43bc-8a6d-11973760064f" path="/var/lib/kubelet/pods/af8934da-e18b-43bc-8a6d-11973760064f/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.481908 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22d11dd-8c6a-4114-bb95-d62054670010" 
path="/var/lib/kubelet/pods/b22d11dd-8c6a-4114-bb95-d62054670010/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.482822 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b18699-4922-43a6-a149-b0c33642f6dc" path="/var/lib/kubelet/pods/c1b18699-4922-43a6-a149-b0c33642f6dc/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.483403 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c841b377-a95f-4533-bcd3-4f5a53a36301" path="/var/lib/kubelet/pods/c841b377-a95f-4533-bcd3-4f5a53a36301/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.484660 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0f3bda-8e27-4887-b3e2-8b04b92d65b2" path="/var/lib/kubelet/pods/ca0f3bda-8e27-4887-b3e2-8b04b92d65b2/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.489737 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee91865-9bfc-44d2-a0e3-87a4b309ad7e" path="/var/lib/kubelet/pods/cee91865-9bfc-44d2-a0e3-87a4b309ad7e/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.491228 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42c5032-0edb-4f98-b937-d4bc09ad513a" path="/var/lib/kubelet/pods/d42c5032-0edb-4f98-b937-d4bc09ad513a/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.492222 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4dbcd08-6def-4380-8cc4-93a156624deb" path="/var/lib/kubelet/pods/d4dbcd08-6def-4380-8cc4-93a156624deb/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.492751 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" containerID="cri-o://2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" gracePeriod=29 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.493039 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4df0a14-2dcb-43de-8f3d-26b25f189888" path="/var/lib/kubelet/pods/d4df0a14-2dcb-43de-8f3d-26b25f189888/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.494366 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb48c98-8877-4be3-b406-096222fd33e6" path="/var/lib/kubelet/pods/fbb48c98-8877-4be3-b406-096222fd33e6/volumes" Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.526204 4903 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 28 16:09:48 crc kubenswrapper[4903]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 28 16:09:48 crc kubenswrapper[4903]: + source /usr/local/bin/container-scripts/functions Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNBridge=br-int Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNRemote=tcp:localhost:6642 Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNEncapType=geneve Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNAvailabilityZones= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ EnableChassisAsGateway=true Jan 28 16:09:48 crc kubenswrapper[4903]: ++ PhysicalNetworks= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNHostName= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 28 16:09:48 crc kubenswrapper[4903]: ++ ovs_dir=/var/lib/openvswitch Jan 28 16:09:48 crc kubenswrapper[4903]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script 
Jan 28 16:09:48 crc kubenswrapper[4903]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 28 16:09:48 crc kubenswrapper[4903]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + sleep 0.5 Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + sleep 0.5 Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + cleanup_ovsdb_server_semaphore Jan 28 16:09:48 crc kubenswrapper[4903]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 16:09:48 crc kubenswrapper[4903]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 28 16:09:48 crc kubenswrapper[4903]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-sdvpf" message=< Jan 28 16:09:48 crc kubenswrapper[4903]: Exiting ovsdb-server (5) [ OK ] Jan 28 16:09:48 crc kubenswrapper[4903]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 28 16:09:48 crc kubenswrapper[4903]: + source /usr/local/bin/container-scripts/functions Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNBridge=br-int Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNRemote=tcp:localhost:6642 Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNEncapType=geneve Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNAvailabilityZones= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ EnableChassisAsGateway=true Jan 28 16:09:48 crc kubenswrapper[4903]: ++ PhysicalNetworks= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNHostName= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 28 16:09:48 crc kubenswrapper[4903]: ++ ovs_dir=/var/lib/openvswitch Jan 28 16:09:48 crc kubenswrapper[4903]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 28 16:09:48 crc kubenswrapper[4903]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 28 16:09:48 crc kubenswrapper[4903]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + sleep 0.5 Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + sleep 0.5 Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + cleanup_ovsdb_server_semaphore Jan 28 16:09:48 crc kubenswrapper[4903]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 16:09:48 crc kubenswrapper[4903]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 28 16:09:48 crc kubenswrapper[4903]: > Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.526261 4903 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 28 16:09:48 crc kubenswrapper[4903]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 28 16:09:48 crc kubenswrapper[4903]: + source /usr/local/bin/container-scripts/functions Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNBridge=br-int Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNRemote=tcp:localhost:6642 Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNEncapType=geneve Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNAvailabilityZones= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ EnableChassisAsGateway=true Jan 28 16:09:48 crc kubenswrapper[4903]: ++ PhysicalNetworks= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ OVNHostName= Jan 28 16:09:48 crc kubenswrapper[4903]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 28 16:09:48 crc kubenswrapper[4903]: ++ ovs_dir=/var/lib/openvswitch Jan 28 16:09:48 crc kubenswrapper[4903]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 28 16:09:48 crc kubenswrapper[4903]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 28 16:09:48 crc kubenswrapper[4903]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + sleep 0.5 Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + sleep 0.5 Jan 28 16:09:48 crc kubenswrapper[4903]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 16:09:48 crc kubenswrapper[4903]: + cleanup_ovsdb_server_semaphore Jan 28 16:09:48 crc kubenswrapper[4903]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 16:09:48 crc kubenswrapper[4903]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 28 16:09:48 crc kubenswrapper[4903]: > pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" containerID="cri-o://7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.526304 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" containerID="cri-o://7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" gracePeriod=29 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.539343 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.561505 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c8080a17-9166-4721-868f-c43799472922" (UID: "c8080a17-9166-4721-868f-c43799472922"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.568239 4903 generic.go:334] "Generic (PLEG): container finished" podID="777a1f56-3b78-4161-b388-22d924bf442c" containerID="7144e9f3e379f3b1c48972a79f95a4ca58fc84bde1c3b98a44aa1c439247a433" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.574931 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.574960 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8080a17-9166-4721-868f-c43799472922-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.574976 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.595760 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0e9123e0-08c8-4892-8378-4f99799d7dfc/ovsdbserver-nb/0.log" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.595804 4903 generic.go:334] "Generic (PLEG): container finished" podID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerID="115db9d03452ef27c97e4292c7d8d47526c8e5ede6cf99f55017f73a5b5958ea" exitCode=143 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.599565 4903 generic.go:334] "Generic (PLEG): container finished" podID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerID="09c605d6038ace2063cd36abb755adc5f02bf5408e796a180094c2237ab62208" exitCode=143 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.610024 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613575 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df7b7b7fc-j8ps6" event={"ID":"777a1f56-3b78-4161-b388-22d924bf442c","Type":"ContainerDied","Data":"7144e9f3e379f3b1c48972a79f95a4ca58fc84bde1c3b98a44aa1c439247a433"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613615 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-sp6mn"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613636 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e9123e0-08c8-4892-8378-4f99799d7dfc","Type":"ContainerDied","Data":"115db9d03452ef27c97e4292c7d8d47526c8e5ede6cf99f55017f73a5b5958ea"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613651 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0e9123e0-08c8-4892-8378-4f99799d7dfc","Type":"ContainerDied","Data":"241a14bfcffaec67bfbc29bf999917853c1f332e6731e9181a7583490b0918fd"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613661 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="241a14bfcffaec67bfbc29bf999917853c1f332e6731e9181a7583490b0918fd" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613671 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613684 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd","Type":"ContainerDied","Data":"09c605d6038ace2063cd36abb755adc5f02bf5408e796a180094c2237ab62208"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613696 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cd9f7788c-9rhk8"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613708 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79d7544958-xm4mt"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613719 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c6e6-account-create-update-st6gx"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613730 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c6e6-account-create-update-st6gx"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613739 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6jgbm"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613748 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-698d7dfbbb-d88kl"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613761 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6jgbm"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613770 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613780 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xvrh9"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.613928 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" 
podUID="d3c39267-5b08-4783-b267-7ee6395020f2" containerName="nova-cell1-conductor-conductor" containerID="cri-o://632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614225 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker-log" containerID="cri-o://6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614347 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79d7544958-xm4mt" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api-log" containerID="cri-o://f5c9a79fdf1fdd76ebd49ee1d6512d0b2f33149f5da0dd564a2edc3e7102a0f1" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614392 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker" containerID="cri-o://deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614441 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79d7544958-xm4mt" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api" containerID="cri-o://4ab5c17cdbc07a22bc6e3f55c4de9ca0284d8300cd938b4df77da1ec21f7ea19" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614467 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener-log" containerID="cri-o://6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614781 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener" containerID="cri-o://c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614794 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-zk982" event={"ID":"dad42813-08ad-4746-b488-af16a6504561","Type":"ContainerDied","Data":"ac5fa928a6299fa4da555a268ab5014fe09528230a48dee3048b346cb50eab23"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.614784 4903 generic.go:334] "Generic (PLEG): container finished" podID="dad42813-08ad-4746-b488-af16a6504561" containerID="ac5fa928a6299fa4da555a268ab5014fe09528230a48dee3048b346cb50eab23" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.620931 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.621123 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2d08ed75-05f7-4c45-bc6e-0562a7bbb936" containerName="nova-cell0-conductor-conductor" containerID="cri-o://d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e" gracePeriod=30 Jan 28 16:09:48 
crc kubenswrapper[4903]: I0128 16:09:48.636473 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0e9123e0-08c8-4892-8378-4f99799d7dfc/ovsdbserver-nb/0.log" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.636568 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.636904 4903 generic.go:334] "Generic (PLEG): container finished" podID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerID="c34ec1bdca9dcf388b45d4df31616bfc2ee16b7a70a6f94f04662492238c5d30" exitCode=143 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.636998 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c3ca866-aac2-4b4f-ac25-71e741d9db2f","Type":"ContainerDied","Data":"c34ec1bdca9dcf388b45d4df31616bfc2ee16b7a70a6f94f04662492238c5d30"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.642575 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xvrh9"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.652538 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wwf2t"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.659820 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.660014 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1f8d7105-dc30-4ef6-b862-eb67eefd4026" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.674145 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_83fe52fb-0760-4173-9567-11d84b522c71/ovsdbserver-sb/0.log" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.674331 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.674453 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"83fe52fb-0760-4173-9567-11d84b522c71","Type":"ContainerDied","Data":"00cec65ead8c82961cd9c1c98242f4731fe55cebb4d82482780f47599df2c142"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.674582 4903 scope.go:117] "RemoveContainer" containerID="ea094faee48284617c25b7bce901cd1d485c8c3eb065114f39cedd97df20a515" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.676466 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.679616 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-rj86j"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.684886 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" event={"ID":"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0","Type":"ContainerStarted","Data":"22d579d5e62274d1f0d4fbae036c99a4561b5eace631d3c8930a78f13b94cb3f"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.696841 4903 generic.go:334] "Generic (PLEG): container finished" podID="e1ce53ab-7d85-47b9-a886-162ef3726997" containerID="79b4ee686b25bbef16eefb66785f1f74ebe67f05a47f44b4dfa49ba85ce6d221" exitCode=137 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.699876 4903 generic.go:334] "Generic (PLEG): container finished" podID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerID="415bb4f9abcba2194b819d557a32350c24234e674b16185bd86d9dd42b6d9a0b" exitCode=143 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.699929 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9","Type":"ContainerDied","Data":"415bb4f9abcba2194b819d557a32350c24234e674b16185bd86d9dd42b6d9a0b"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.701584 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wwf2t" event={"ID":"0ee28286-9cd6-4014-b388-a41d22c5e413","Type":"ContainerStarted","Data":"72b2bf49789b69fa882ddd87f89c37c5436c9eea9ee535f86db16d810b943d9d"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.702223 4903 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-wwf2t" secret="" err="secret \"galera-openstack-cell1-dockercfg-tnv27\" not found" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.704036 4903 generic.go:334] "Generic (PLEG): container finished" podID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerID="5dd7a851cd619c29827b0ea6cd215ddd77b2818c97ba5045d1ae347a56fe5ca2" exitCode=143 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.704072 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-868d5455d4-797gw" event={"ID":"d91d56c5-1ada-417a-8a87-dc4e3960a186","Type":"ContainerDied","Data":"5dd7a851cd619c29827b0ea6cd215ddd77b2818c97ba5045d1ae347a56fe5ca2"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713210 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="fdfe4956af02ae007c08b5307ab6872b8e0595452ba36784decb8edd4b8a5d9b" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713232 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="5f7182de515dde6ed72737089f102bb7c64b5bceae2ea9dd0e07b98590e0126b" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713240 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="fddb56423e806702e1b6dee36e7347c017a45be9d08b635bb4e199df0eb3489e" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713246 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="bbcf62a11c97c0772b915ab52c7b8ed5336a2b9f1735f7d74650ddbac7968b3f" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713252 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="eebba63abd410036bd2f597b488df5fd3fc712afc83ddb919fb3f33d78e82010" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713258 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="49fa880f8fb88d223229db177857faa713b2086ac01e656664ea7ecec2ee6237" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713264 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="987273170f201bd99282bf5c33154171012fac1d73596bce885546d8d13a8681" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713270 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="eb7902754910c952a0e047350a7096399669542b9269940b5d03b5d9577fabae" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713276 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="9ec33b0218cbf5be31eaa4605b066cecb134d4131c4136762bbbf8bceaed18e9" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713282 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="2077d11c701d11f3d5b9f94bf673c99cd175858ca2ee3f9f5496123712d24aa8" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713288 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="c78ef9751a8dce58d95c9353ff8051a2fbe27f2886b49daeb6742161a84e3b25" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: 
I0128 16:09:48.713293 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="1902647852c72d50cd7f7eba6e1b998be88fa3e8bce1292d120aa7ad36fcce6a" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713299 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="05bd562da8eff098ad5295672772555c223f358c232a73d480a9a4208fbc2f2e" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713305 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="8d6925cdba582789ace3400817f99ef5a11fa5573bf42b9183b2310d83669949" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713339 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"fdfe4956af02ae007c08b5307ab6872b8e0595452ba36784decb8edd4b8a5d9b"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713359 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"5f7182de515dde6ed72737089f102bb7c64b5bceae2ea9dd0e07b98590e0126b"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713369 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"fddb56423e806702e1b6dee36e7347c017a45be9d08b635bb4e199df0eb3489e"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713377 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"bbcf62a11c97c0772b915ab52c7b8ed5336a2b9f1735f7d74650ddbac7968b3f"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713386 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"eebba63abd410036bd2f597b488df5fd3fc712afc83ddb919fb3f33d78e82010"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713395 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"49fa880f8fb88d223229db177857faa713b2086ac01e656664ea7ecec2ee6237"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713403 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"987273170f201bd99282bf5c33154171012fac1d73596bce885546d8d13a8681"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713412 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"eb7902754910c952a0e047350a7096399669542b9269940b5d03b5d9577fabae"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713420 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"9ec33b0218cbf5be31eaa4605b066cecb134d4131c4136762bbbf8bceaed18e9"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713429 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"2077d11c701d11f3d5b9f94bf673c99cd175858ca2ee3f9f5496123712d24aa8"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713438 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"c78ef9751a8dce58d95c9353ff8051a2fbe27f2886b49daeb6742161a84e3b25"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713446 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"1902647852c72d50cd7f7eba6e1b998be88fa3e8bce1292d120aa7ad36fcce6a"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713453 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"05bd562da8eff098ad5295672772555c223f358c232a73d480a9a4208fbc2f2e"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.713462 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"8d6925cdba582789ace3400817f99ef5a11fa5573bf42b9183b2310d83669949"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.724675 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.726332 4903 generic.go:334] "Generic (PLEG): container finished" podID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerID="84e46dfe4c416722411c13edc8cb824e9b50a554e89df0cadc2ab7b6cbd19188" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.726478 4903 generic.go:334] "Generic (PLEG): container finished" podID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerID="ec452ecafe6bbdf14b8e60c7db18384312eea995612c19c665214db7b6ff8163" exitCode=0 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.726587 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"967fdf30-3d73-4e3f-9056-e270e10d3213","Type":"ContainerDied","Data":"84e46dfe4c416722411c13edc8cb824e9b50a554e89df0cadc2ab7b6cbd19188"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.726682 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"967fdf30-3d73-4e3f-9056-e270e10d3213","Type":"ContainerDied","Data":"ec452ecafe6bbdf14b8e60c7db18384312eea995612c19c665214db7b6ff8163"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.728361 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sqdt2_c8080a17-9166-4721-868f-c43799472922/openstack-network-exporter/0.log" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.728429 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sqdt2" event={"ID":"c8080a17-9166-4721-868f-c43799472922","Type":"ContainerDied","Data":"773359f6f7505f538c002ad4062bfa1b612aeb5709fb9efdc39360e460746594"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.728516 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-sqdt2" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.742592 4903 scope.go:117] "RemoveContainer" containerID="b068b0541444e9457126fbba0acffd002fec18d4cbec22a881a9621834e71d6d" Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.743193 4903 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 16:09:48 crc kubenswrapper[4903]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: if [ -n "" ]; then Jan 28 16:09:48 crc kubenswrapper[4903]: GRANT_DATABASE="" Jan 28 16:09:48 crc kubenswrapper[4903]: else Jan 28 16:09:48 crc kubenswrapper[4903]: GRANT_DATABASE="*" Jan 28 16:09:48 crc kubenswrapper[4903]: fi Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: # going for maximum compatibility here: Jan 28 16:09:48 crc kubenswrapper[4903]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 16:09:48 crc kubenswrapper[4903]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 16:09:48 crc kubenswrapper[4903]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 16:09:48 crc kubenswrapper[4903]: # support updates Jan 28 16:09:48 crc kubenswrapper[4903]: Jan 28 16:09:48 crc kubenswrapper[4903]: $MYSQL_CMD < logger="UnhandledError" Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.745802 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-wwf2t" podUID="0ee28286-9cd6-4014-b388-a41d22c5e413" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.749569 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" containerName="rabbitmq" containerID="cri-o://3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef" gracePeriod=604800 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.766077 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerName="galera" containerID="cri-o://794515d4b47b412812a3f26bee010ffe855a15147bcf38cac1153e75b984d927" gracePeriod=30 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.768614 4903 generic.go:334] "Generic (PLEG): container finished" podID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerID="80e37ef3a7839cc1c8d8d21208fac7637eb50268a7239fb7994a6925aeaeb7ef" exitCode=143 Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.768655 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"7f4f5f43-7fbc-41d1-935d-b0844db162a7","Type":"ContainerDied","Data":"80e37ef3a7839cc1c8d8d21208fac7637eb50268a7239fb7994a6925aeaeb7ef"} Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.771654 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "83fe52fb-0760-4173-9567-11d84b522c71" (UID: "83fe52fb-0760-4173-9567-11d84b522c71"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.773700 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.791998 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69gf8\" (UniqueName: \"kubernetes.io/projected/0e9123e0-08c8-4892-8378-4f99799d7dfc-kube-api-access-69gf8\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.792104 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-scripts\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.792149 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.792231 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-config\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.792288 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdb-rundir\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.792327 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdbserver-nb-tls-certs\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.792366 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-metrics-certs-tls-certs\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.792409 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle\") pod 
\"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.793134 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-config" (OuterVolumeSpecName: "config") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.793181 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.795763 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-scripts" (OuterVolumeSpecName: "scripts") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.795782 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.801885 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.804838 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.804870 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.804881 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83fe52fb-0760-4173-9567-11d84b522c71-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.804891 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e9123e0-08c8-4892-8378-4f99799d7dfc-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.804913 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.813865 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9123e0-08c8-4892-8378-4f99799d7dfc-kube-api-access-69gf8" (OuterVolumeSpecName: "kube-api-access-69gf8") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "kube-api-access-69gf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.827365 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-sqdt2"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.852584 4903 scope.go:117] "RemoveContainer" containerID="38013f51046f369b6687e2c5d59c171aa0431838ce787e24819e162b03bcc631" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.874622 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-sqdt2"] Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.876607 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907006 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-config\") pod \"dad42813-08ad-4746-b488-af16a6504561\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907139 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data-custom\") pod \"967fdf30-3d73-4e3f-9056-e270e10d3213\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907159 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data\") pod 
\"967fdf30-3d73-4e3f-9056-e270e10d3213\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907213 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-nb\") pod \"dad42813-08ad-4746-b488-af16a6504561\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907476 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config-secret\") pod \"e1ce53ab-7d85-47b9-a886-162ef3726997\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907496 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-sb\") pod \"dad42813-08ad-4746-b488-af16a6504561\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907513 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-scripts\") pod \"967fdf30-3d73-4e3f-9056-e270e10d3213\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.907549 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-combined-ca-bundle\") pod \"e1ce53ab-7d85-47b9-a886-162ef3726997\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908252 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908292 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snhs4\" (UniqueName: \"kubernetes.io/projected/967fdf30-3d73-4e3f-9056-e270e10d3213-kube-api-access-snhs4\") pod \"967fdf30-3d73-4e3f-9056-e270e10d3213\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908330 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-svc\") pod \"dad42813-08ad-4746-b488-af16a6504561\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908359 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle\") pod \"0e9123e0-08c8-4892-8378-4f99799d7dfc\" (UID: \"0e9123e0-08c8-4892-8378-4f99799d7dfc\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908384 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-swift-storage-0\") pod \"dad42813-08ad-4746-b488-af16a6504561\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908435 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/967fdf30-3d73-4e3f-9056-e270e10d3213-etc-machine-id\") pod \"967fdf30-3d73-4e3f-9056-e270e10d3213\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908466 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dnvr\" (UniqueName: \"kubernetes.io/projected/e1ce53ab-7d85-47b9-a886-162ef3726997-kube-api-access-4dnvr\") pod \"e1ce53ab-7d85-47b9-a886-162ef3726997\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908509 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkbjt\" (UniqueName: \"kubernetes.io/projected/dad42813-08ad-4746-b488-af16a6504561-kube-api-access-jkbjt\") pod \"dad42813-08ad-4746-b488-af16a6504561\" (UID: \"dad42813-08ad-4746-b488-af16a6504561\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908556 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-combined-ca-bundle\") pod \"967fdf30-3d73-4e3f-9056-e270e10d3213\" (UID: \"967fdf30-3d73-4e3f-9056-e270e10d3213\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.908587 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config\") pod \"e1ce53ab-7d85-47b9-a886-162ef3726997\" (UID: \"e1ce53ab-7d85-47b9-a886-162ef3726997\") " Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.909444 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc 
kubenswrapper[4903]: I0128 16:09:48.909460 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69gf8\" (UniqueName: \"kubernetes.io/projected/0e9123e0-08c8-4892-8378-4f99799d7dfc-kube-api-access-69gf8\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.929480 4903 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 28 16:09:48 crc kubenswrapper[4903]: E0128 16:09:48.929590 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts podName:0ee28286-9cd6-4014-b388-a41d22c5e413 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:49.429571957 +0000 UTC m=+1461.705543468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts") pod "root-account-create-update-wwf2t" (UID: "0ee28286-9cd6-4014-b388-a41d22c5e413") : configmap "openstack-cell1-scripts" not found Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.929885 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/967fdf30-3d73-4e3f-9056-e270e10d3213-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "967fdf30-3d73-4e3f-9056-e270e10d3213" (UID: "967fdf30-3d73-4e3f-9056-e270e10d3213"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: W0128 16:09:48.933194 4903 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0e9123e0-08c8-4892-8378-4f99799d7dfc/volumes/kubernetes.io~secret/combined-ca-bundle Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.933226 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.940711 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ce53ab-7d85-47b9-a886-162ef3726997-kube-api-access-4dnvr" (OuterVolumeSpecName: "kube-api-access-4dnvr") pod "e1ce53ab-7d85-47b9-a886-162ef3726997" (UID: "e1ce53ab-7d85-47b9-a886-162ef3726997"). InnerVolumeSpecName "kube-api-access-4dnvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.941414 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad42813-08ad-4746-b488-af16a6504561-kube-api-access-jkbjt" (OuterVolumeSpecName: "kube-api-access-jkbjt") pod "dad42813-08ad-4746-b488-af16a6504561" (UID: "dad42813-08ad-4746-b488-af16a6504561"). InnerVolumeSpecName "kube-api-access-jkbjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.957973 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967fdf30-3d73-4e3f-9056-e270e10d3213-kube-api-access-snhs4" (OuterVolumeSpecName: "kube-api-access-snhs4") pod "967fdf30-3d73-4e3f-9056-e270e10d3213" (UID: "967fdf30-3d73-4e3f-9056-e270e10d3213"). 
InnerVolumeSpecName "kube-api-access-snhs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.960274 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "967fdf30-3d73-4e3f-9056-e270e10d3213" (UID: "967fdf30-3d73-4e3f-9056-e270e10d3213"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.960451 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-scripts" (OuterVolumeSpecName: "scripts") pod "967fdf30-3d73-4e3f-9056-e270e10d3213" (UID: "967fdf30-3d73-4e3f-9056-e270e10d3213"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.972775 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.973177 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dad42813-08ad-4746-b488-af16a6504561" (UID: "dad42813-08ad-4746-b488-af16a6504561"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4903]: I0128 16:09:48.985782 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0e9123e0-08c8-4892-8378-4f99799d7dfc" (UID: "0e9123e0-08c8-4892-8378-4f99799d7dfc"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011735 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011796 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011807 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011816 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011870 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snhs4\" (UniqueName: \"kubernetes.io/projected/967fdf30-3d73-4e3f-9056-e270e10d3213-kube-api-access-snhs4\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011887 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9123e0-08c8-4892-8378-4f99799d7dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011901 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/967fdf30-3d73-4e3f-9056-e270e10d3213-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011935 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dnvr\" (UniqueName: \"kubernetes.io/projected/e1ce53ab-7d85-47b9-a886-162ef3726997-kube-api-access-4dnvr\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011948 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkbjt\" (UniqueName: \"kubernetes.io/projected/dad42813-08ad-4746-b488-af16a6504561-kube-api-access-jkbjt\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.011960 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.029590 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.040133 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e1ce53ab-7d85-47b9-a886-162ef3726997" (UID: "e1ce53ab-7d85-47b9-a886-162ef3726997"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.041425 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1ce53ab-7d85-47b9-a886-162ef3726997" (UID: "e1ce53ab-7d85-47b9-a886-162ef3726997"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.046805 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.063253 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dad42813-08ad-4746-b488-af16a6504561" (UID: "dad42813-08ad-4746-b488-af16a6504561"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.064876 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dad42813-08ad-4746-b488-af16a6504561" (UID: "dad42813-08ad-4746-b488-af16a6504561"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.065465 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dad42813-08ad-4746-b488-af16a6504561" (UID: "dad42813-08ad-4746-b488-af16a6504561"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.079640 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "967fdf30-3d73-4e3f-9056-e270e10d3213" (UID: "967fdf30-3d73-4e3f-9056-e270e10d3213"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.115760 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.115792 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.115804 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.115816 4903 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.115828 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.115839 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.121702 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-config" (OuterVolumeSpecName: "config") pod "dad42813-08ad-4746-b488-af16a6504561" (UID: "dad42813-08ad-4746-b488-af16a6504561"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.137479 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e1ce53ab-7d85-47b9-a886-162ef3726997" (UID: "e1ce53ab-7d85-47b9-a886-162ef3726997"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.246559 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e1ce53ab-7d85-47b9-a886-162ef3726997-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.246594 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dad42813-08ad-4746-b488-af16a6504561-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.258398 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-867d8c4cc5-vz4lw"] Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.258778 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-httpd" containerID="cri-o://7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2" gracePeriod=30 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.259049 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-server" containerID="cri-o://d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4" gracePeriod=30 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.318595 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data" (OuterVolumeSpecName: "config-data") pod "967fdf30-3d73-4e3f-9056-e270e10d3213" (UID: "967fdf30-3d73-4e3f-9056-e270e10d3213"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.348502 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967fdf30-3d73-4e3f-9056-e270e10d3213-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.455433 4903 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.455507 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts podName:0ee28286-9cd6-4014-b388-a41d22c5e413 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:50.455489114 +0000 UTC m=+1462.731460625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts") pod "root-account-create-update-wwf2t" (UID: "0ee28286-9cd6-4014-b388-a41d22c5e413") : configmap "openstack-cell1-scripts" not found Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.528753 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.559482 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.559683 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data podName:cee6442c-f9ef-4902-b6ec-2bc01a904849 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:53.559661775 +0000 UTC m=+1465.835633286 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data") pod "rabbitmq-cell1-server-0" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849") : configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.670074 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-operator-scripts\") pod \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.670118 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24t7j\" (UniqueName: \"kubernetes.io/projected/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-kube-api-access-24t7j\") pod \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\" (UID: \"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0\") " Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.671261 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0" (UID: "baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.675599 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-kube-api-access-24t7j" (OuterVolumeSpecName: "kube-api-access-24t7j") pod "baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0" (UID: "baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0"). InnerVolumeSpecName "kube-api-access-24t7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.734830 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.774626 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs9ls\" (UniqueName: \"kubernetes.io/projected/1f8d7105-dc30-4ef6-b862-eb67eefd4026-kube-api-access-cs9ls\") pod \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.774700 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-config-data\") pod \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.774718 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-combined-ca-bundle\") pod \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.774844 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-nova-novncproxy-tls-certs\") pod \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.774889 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-vencrypt-tls-certs\") pod \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\" (UID: \"1f8d7105-dc30-4ef6-b862-eb67eefd4026\") " Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.775340 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.775356 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24t7j\" (UniqueName: \"kubernetes.io/projected/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0-kube-api-access-24t7j\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.778907 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8d7105-dc30-4ef6-b862-eb67eefd4026-kube-api-access-cs9ls" (OuterVolumeSpecName: "kube-api-access-cs9ls") pod "1f8d7105-dc30-4ef6-b862-eb67eefd4026" (UID: "1f8d7105-dc30-4ef6-b862-eb67eefd4026"). InnerVolumeSpecName "kube-api-access-cs9ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.817160 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-config-data" (OuterVolumeSpecName: "config-data") pod "1f8d7105-dc30-4ef6-b862-eb67eefd4026" (UID: "1f8d7105-dc30-4ef6-b862-eb67eefd4026"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.821342 4903 generic.go:334] "Generic (PLEG): container finished" podID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerID="794515d4b47b412812a3f26bee010ffe855a15147bcf38cac1153e75b984d927" exitCode=0 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.821406 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1423eabe-b2af-4a42-a38e-d5c1c53e7845","Type":"ContainerDied","Data":"794515d4b47b412812a3f26bee010ffe855a15147bcf38cac1153e75b984d927"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.823401 4903 scope.go:117] "RemoveContainer" containerID="79b4ee686b25bbef16eefb66785f1f74ebe67f05a47f44b4dfa49ba85ce6d221" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.823588 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.854079 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"967fdf30-3d73-4e3f-9056-e270e10d3213","Type":"ContainerDied","Data":"4842cbaff69ee464c8b52f74112164325757b5ceb67640132bbf740fd1b347bc"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.854239 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.860816 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f8d7105-dc30-4ef6-b862-eb67eefd4026" (UID: "1f8d7105-dc30-4ef6-b862-eb67eefd4026"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.869063 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" event={"ID":"baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0","Type":"ContainerDied","Data":"22d579d5e62274d1f0d4fbae036c99a4561b5eace631d3c8930a78f13b94cb3f"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.869165 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ff7-account-create-update-rj86j" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.878964 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs9ls\" (UniqueName: \"kubernetes.io/projected/1f8d7105-dc30-4ef6-b862-eb67eefd4026-kube-api-access-cs9ls\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.879003 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.879016 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.893878 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-zk982" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.894177 4903 scope.go:117] "RemoveContainer" containerID="84e46dfe4c416722411c13edc8cb824e9b50a554e89df0cadc2ab7b6cbd19188" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.894444 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-zk982" event={"ID":"dad42813-08ad-4746-b488-af16a6504561","Type":"ContainerDied","Data":"d046ae0a0501f3b550adf715db850dd87f629f2ee82a870ede30ad87c4e9f9f6"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.908342 4903 generic.go:334] "Generic (PLEG): container finished" podID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerID="7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2" exitCode=0 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.908435 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" event={"ID":"bf32204d-973f-4397-8fbe-8b155f1f6f52","Type":"ContainerDied","Data":"7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.909159 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "1f8d7105-dc30-4ef6-b862-eb67eefd4026" (UID: "1f8d7105-dc30-4ef6-b862-eb67eefd4026"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.913073 4903 generic.go:334] "Generic (PLEG): container finished" podID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerID="6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228" exitCode=143 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.913142 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" event={"ID":"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9","Type":"ContainerDied","Data":"6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.918212 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "1f8d7105-dc30-4ef6-b862-eb67eefd4026" (UID: "1f8d7105-dc30-4ef6-b862-eb67eefd4026"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.926809 4903 generic.go:334] "Generic (PLEG): container finished" podID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerID="2cc0c1e09b1d32a98d2dde5eee40318869853a44f68e5250ff8ceb601a48d512" exitCode=0 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.926892 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"033b894a-46ce-4bd8-b97c-312c8b7c90dd","Type":"ContainerDied","Data":"2cc0c1e09b1d32a98d2dde5eee40318869853a44f68e5250ff8ceb601a48d512"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.943085 4903 generic.go:334] "Generic (PLEG): container finished" podID="1f8d7105-dc30-4ef6-b862-eb67eefd4026" containerID="8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e" exitCode=0 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.943133 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.943195 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f8d7105-dc30-4ef6-b862-eb67eefd4026","Type":"ContainerDied","Data":"8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.943223 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1f8d7105-dc30-4ef6-b862-eb67eefd4026","Type":"ContainerDied","Data":"c9790dea5b32c1b6a9f9a411a0fa3cf1d686b63fc4fea92be53f8b53c2e57f69"} Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.949757 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.955997 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.958448 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.958705 4903 generic.go:334] "Generic (PLEG): container finished" podID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" exitCode=0 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.958769 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sdvpf" event={"ID":"87970b20-51e0-4e11-875a-8dea3b633ac5","Type":"ContainerDied","Data":"7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d"} Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.961757 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" 
cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 28 16:09:49 crc kubenswrapper[4903]: E0128 16:09:49.961803 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="ovn-northd" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.974078 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.977790 4903 scope.go:117] "RemoveContainer" containerID="ec452ecafe6bbdf14b8e60c7db18384312eea995612c19c665214db7b6ff8163" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.978006 4903 generic.go:334] "Generic (PLEG): container finished" podID="64646a57-b496-4bf3-8b63-d53321316304" containerID="6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82" exitCode=143 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.978088 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" event={"ID":"64646a57-b496-4bf3-8b63-d53321316304","Type":"ContainerDied","Data":"6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.982501 4903 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.982554 4903 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8d7105-dc30-4ef6-b862-eb67eefd4026-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.984506 4903 generic.go:334] "Generic (PLEG): container finished" podID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerID="f5c9a79fdf1fdd76ebd49ee1d6512d0b2f33149f5da0dd564a2edc3e7102a0f1" exitCode=143 Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.984622 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.988552 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d7544958-xm4mt" event={"ID":"438d1db6-7b20-4f31-8a43-aa8f0c972501","Type":"ContainerDied","Data":"f5c9a79fdf1fdd76ebd49ee1d6512d0b2f33149f5da0dd564a2edc3e7102a0f1"} Jan 28 16:09:49 crc kubenswrapper[4903]: I0128 16:09:49.995424 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-zk982"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.008014 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-zk982"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.091176 4903 scope.go:117] "RemoveContainer" containerID="ac5fa928a6299fa4da555a268ab5014fe09528230a48dee3048b346cb50eab23" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.099318 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.144383 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.177594 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.194923 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.194996 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data podName:bb51034c-4387-4aba-8eff-6ff960538da9 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:54.194978806 +0000 UTC m=+1466.470950317 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data") pod "rabbitmq-server-0" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9") : configmap "rabbitmq-config-data" not found Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.195714 4903 scope.go:117] "RemoveContainer" containerID="a67f772dccae3b47ab1f4d72830713aa1130fd35e60e57d62e1f436580945a77" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.202128 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.223215 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-rj86j"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.236965 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4ff7-account-create-update-rj86j"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.335437 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.351337 4903 scope.go:117] "RemoveContainer" containerID="8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.398668 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-combined-ca-bundle\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399564 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-public-tls-certs\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399618 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399643 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/033b894a-46ce-4bd8-b97c-312c8b7c90dd-etc-machine-id\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399711 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data-custom\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399740 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-scripts\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399761 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-internal-tls-certs\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399796 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cz8b\" (UniqueName: \"kubernetes.io/projected/033b894a-46ce-4bd8-b97c-312c8b7c90dd-kube-api-access-9cz8b\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.399891 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/033b894a-46ce-4bd8-b97c-312c8b7c90dd-logs\") pod \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\" (UID: \"033b894a-46ce-4bd8-b97c-312c8b7c90dd\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.401656 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/033b894a-46ce-4bd8-b97c-312c8b7c90dd-logs" (OuterVolumeSpecName: "logs") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.404556 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/033b894a-46ce-4bd8-b97c-312c8b7c90dd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.417945 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8d7105_dc30_4ef6_b862_eb67eefd4026.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf32204d_973f_4397_8fbe_8b155f1f6f52.slice/crio-d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf32204d_973f_4397_8fbe_8b155f1f6f52.slice/crio-conmon-d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddad42813_08ad_4746_b488_af16a6504561.slice/crio-d046ae0a0501f3b550adf715db850dd87f629f2ee82a870ede30ad87c4e9f9f6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e9123e0_08c8_4892_8378_4f99799d7dfc.slice/crio-241a14bfcffaec67bfbc29bf999917853c1f332e6731e9181a7583490b0918fd\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod967fdf30_3d73_4e3f_9056_e270e10d3213.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddad42813_08ad_4746_b488_af16a6504561.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaf4cea7_8229_45bd_9c03_c7d4e5c2e9a0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod967fdf30_3d73_4e3f_9056_e270e10d3213.slice/crio-4842cbaff69ee464c8b52f74112164325757b5ceb67640132bbf740fd1b347bc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e9123e0_08c8_4892_8378_4f99799d7dfc.slice\": RecentStats: unable to find data in memory cache]" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.432793 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" path="/var/lib/kubelet/pods/0e9123e0-08c8-4892-8378-4f99799d7dfc/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.434306 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cff8440-59d9-4491-ae2e-2568b28d8ae3" path="/var/lib/kubelet/pods/1cff8440-59d9-4491-ae2e-2568b28d8ae3/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.434846 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1f8d7105-dc30-4ef6-b862-eb67eefd4026" path="/var/lib/kubelet/pods/1f8d7105-dc30-4ef6-b862-eb67eefd4026/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.435365 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f38f215-5d58-4933-90c7-ccf27a223339" path="/var/lib/kubelet/pods/7f38f215-5d58-4933-90c7-ccf27a223339/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.437217 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83fe52fb-0760-4173-9567-11d84b522c71" path="/var/lib/kubelet/pods/83fe52fb-0760-4173-9567-11d84b522c71/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.437909 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b91a6df-a714-4199-b4dc-3b9ecf398074" path="/var/lib/kubelet/pods/8b91a6df-a714-4199-b4dc-3b9ecf398074/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.438390 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d476df-369e-428e-945d-f2a3dc1a78ea" path="/var/lib/kubelet/pods/94d476df-369e-428e-945d-f2a3dc1a78ea/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.439645 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" path="/var/lib/kubelet/pods/967fdf30-3d73-4e3f-9056-e270e10d3213/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.440393 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0" path="/var/lib/kubelet/pods/baf4cea7-8229-45bd-9c03-c7d4e5c2e9a0/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.440776 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8080a17-9166-4721-868f-c43799472922" path="/var/lib/kubelet/pods/c8080a17-9166-4721-868f-c43799472922/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.441462 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad42813-08ad-4746-b488-af16a6504561" path="/var/lib/kubelet/pods/dad42813-08ad-4746-b488-af16a6504561/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.442606 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ce53ab-7d85-47b9-a886-162ef3726997" path="/var/lib/kubelet/pods/e1ce53ab-7d85-47b9-a886-162ef3726997/volumes" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.444491 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.446651 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.476326 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-scripts" (OuterVolumeSpecName: "scripts") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.476465 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033b894a-46ce-4bd8-b97c-312c8b7c90dd-kube-api-access-9cz8b" (OuterVolumeSpecName: "kube-api-access-9cz8b") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "kube-api-access-9cz8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.488196 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.490704 4903 scope.go:117] "RemoveContainer" containerID="8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.492416 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e\": container with ID starting with 8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e not found: ID does not exist" containerID="8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.492459 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e"} err="failed to get container status \"8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e\": rpc error: code = NotFound desc = could not find container \"8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e\": container with ID starting with 8a01a2846c1d8a1904b3cef25694f7726c930a201a4574af4f2979822448976e not found: ID does not exist" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.501774 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kolla-config\") pod \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.501824 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-galera-tls-certs\") pod \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.501945 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rqn\" (UniqueName: \"kubernetes.io/projected/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kube-api-access-h4rqn\") pod \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.501971 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-combined-ca-bundle\") pod 
\"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502006 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-generated\") pod \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502043 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-default\") pod \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502093 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-operator-scripts\") pod \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502221 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\" (UID: \"1423eabe-b2af-4a42-a38e-d5c1c53e7845\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502468 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502764 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502786 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/033b894a-46ce-4bd8-b97c-312c8b7c90dd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502802 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502813 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502824 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cz8b\" (UniqueName: \"kubernetes.io/projected/033b894a-46ce-4bd8-b97c-312c8b7c90dd-kube-api-access-9cz8b\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502838 4903 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502848 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/033b894a-46ce-4bd8-b97c-312c8b7c90dd-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.502913 4903 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.502937 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.502962 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts podName:0ee28286-9cd6-4014-b388-a41d22c5e413 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:52.502945757 +0000 UTC m=+1464.778917268 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts") pod "root-account-create-update-wwf2t" (UID: "0ee28286-9cd6-4014-b388-a41d22c5e413") : configmap "openstack-cell1-scripts" not found Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.503373 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.504044 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.512189 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kube-api-access-h4rqn" (OuterVolumeSpecName: "kube-api-access-h4rqn") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "kube-api-access-h4rqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.526955 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.555852 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.557260 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.557321 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data" (OuterVolumeSpecName: "config-data") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.563277 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.568736 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.568809 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="d3c39267-5b08-4783-b267-7ee6395020f2" containerName="nova-cell1-conductor-conductor" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.584918 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "1423eabe-b2af-4a42-a38e-d5c1c53e7845" (UID: "1423eabe-b2af-4a42-a38e-d5c1c53e7845"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.589710 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604862 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604887 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1423eabe-b2af-4a42-a38e-d5c1c53e7845-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604897 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604905 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604924 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604933 4903 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604944 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rqn\" (UniqueName: \"kubernetes.io/projected/1423eabe-b2af-4a42-a38e-d5c1c53e7845-kube-api-access-h4rqn\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604955 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1423eabe-b2af-4a42-a38e-d5c1c53e7845-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.604966 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1423eabe-b2af-4a42-a38e-d5c1c53e7845-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.631417 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "033b894a-46ce-4bd8-b97c-312c8b7c90dd" (UID: "033b894a-46ce-4bd8-b97c-312c8b7c90dd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.642667 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.693733 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.707864 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfr77\" (UniqueName: \"kubernetes.io/projected/0ee28286-9cd6-4014-b388-a41d22c5e413-kube-api-access-pfr77\") pod \"0ee28286-9cd6-4014-b388-a41d22c5e413\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.707980 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts\") pod \"0ee28286-9cd6-4014-b388-a41d22c5e413\" (UID: \"0ee28286-9cd6-4014-b388-a41d22c5e413\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.708682 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ee28286-9cd6-4014-b388-a41d22c5e413" (UID: "0ee28286-9cd6-4014-b388-a41d22c5e413"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.708813 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ee28286-9cd6-4014-b388-a41d22c5e413-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.708835 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.708844 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/033b894a-46ce-4bd8-b97c-312c8b7c90dd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.714018 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee28286-9cd6-4014-b388-a41d22c5e413-kube-api-access-pfr77" (OuterVolumeSpecName: "kube-api-access-pfr77") pod "0ee28286-9cd6-4014-b388-a41d22c5e413" (UID: "0ee28286-9cd6-4014-b388-a41d22c5e413"). InnerVolumeSpecName "kube-api-access-pfr77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.773200 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.810119 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-etc-swift\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.810484 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-config-data\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.810630 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-log-httpd\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.810727 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcb9x\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-kube-api-access-qcb9x\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.810834 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-internal-tls-certs\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.810901 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-public-tls-certs\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.810996 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-run-httpd\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.811127 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-combined-ca-bundle\") pod \"bf32204d-973f-4397-8fbe-8b155f1f6f52\" (UID: \"bf32204d-973f-4397-8fbe-8b155f1f6f52\") " Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.811681 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfr77\" (UniqueName: \"kubernetes.io/projected/0ee28286-9cd6-4014-b388-a41d22c5e413-kube-api-access-pfr77\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.816569 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.816922 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.827124 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.831885 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-kube-api-access-qcb9x" (OuterVolumeSpecName: "kube-api-access-qcb9x") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "kube-api-access-qcb9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.881798 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.893292 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.901726 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-588cq"] Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.902600 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.902753 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.902868 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-httpd" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.902965 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-httpd" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.903183 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.903257 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.903344 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8080a17-9166-4721-868f-c43799472922" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.903412 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8080a17-9166-4721-868f-c43799472922" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.903487 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad42813-08ad-4746-b488-af16a6504561" containerName="dnsmasq-dns" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.903562 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad42813-08ad-4746-b488-af16a6504561" containerName="dnsmasq-dns" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.903635 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="cinder-scheduler" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.903713 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="cinder-scheduler" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.903807 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerName="galera" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.903892 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerName="galera" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.903970 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.904035 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.904106 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="ovsdbserver-sb" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.904167 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="ovsdbserver-sb" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.904238 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="probe" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.904303 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="probe" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.904394 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerName="mysql-bootstrap" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.904489 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerName="mysql-bootstrap" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.904576 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="ovsdbserver-nb" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.904642 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="ovsdbserver-nb" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.904710 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad42813-08ad-4746-b488-af16a6504561" containerName="init" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.904774 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad42813-08ad-4746-b488-af16a6504561" containerName="init" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.904856 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-server" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.904941 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-server" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.905020 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api-log" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.905082 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api-log" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.905135 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8d7105-dc30-4ef6-b862-eb67eefd4026" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.905216 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8d7105-dc30-4ef6-b862-eb67eefd4026" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.905587 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.905675 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.905769 4903 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="ovsdbserver-sb" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.905860 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-httpd" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.905938 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8d7105-dc30-4ef6-b862-eb67eefd4026" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906023 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="cinder-scheduler" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906091 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="967fdf30-3d73-4e3f-9056-e270e10d3213" containerName="probe" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906209 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fe52fb-0760-4173-9567-11d84b522c71" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906308 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" containerName="cinder-api-log" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906393 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-server" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906461 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" containerName="galera" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906542 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8080a17-9166-4721-868f-c43799472922" containerName="openstack-network-exporter" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906608 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9123e0-08c8-4892-8378-4f99799d7dfc" containerName="ovsdbserver-nb" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.906675 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad42813-08ad-4746-b488-af16a6504561" containerName="dnsmasq-dns" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.907519 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-588cq" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.903644 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-config-data" (OuterVolumeSpecName: "config-data") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.910727 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.910751 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-588cq"] Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.913841 4903 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.913946 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.914130 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.914193 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcb9x\" (UniqueName: \"kubernetes.io/projected/bf32204d-973f-4397-8fbe-8b155f1f6f52-kube-api-access-qcb9x\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.914256 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.914313 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.914368 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf32204d-973f-4397-8fbe-8b155f1f6f52-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:50 crc kubenswrapper[4903]: I0128 16:09:50.923453 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf32204d-973f-4397-8fbe-8b155f1f6f52" (UID: "bf32204d-973f-4397-8fbe-8b155f1f6f52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.964313 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e is running failed: container process not found" containerID="d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.964839 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e is running failed: container process not found" containerID="d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.968956 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e is running failed: container process not found" containerID="d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 16:09:50 crc kubenswrapper[4903]: E0128 16:09:50.968995 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="2d08ed75-05f7-4c45-bc6e-0562a7bbb936" containerName="nova-cell0-conductor-conductor" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.016444 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frjxk\" (UniqueName: \"kubernetes.io/projected/cad02107-7c85-434e-aeb2-ab7a9924743d-kube-api-access-frjxk\") pod \"root-account-create-update-588cq\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " pod="openstack/root-account-create-update-588cq" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.016545 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad02107-7c85-434e-aeb2-ab7a9924743d-operator-scripts\") pod \"root-account-create-update-588cq\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " pod="openstack/root-account-create-update-588cq" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.017133 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf32204d-973f-4397-8fbe-8b155f1f6f52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.021153 4903 generic.go:334] "Generic (PLEG): container finished" podID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerID="d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4" exitCode=0 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.021224 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.021245 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" event={"ID":"bf32204d-973f-4397-8fbe-8b155f1f6f52","Type":"ContainerDied","Data":"d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4"} Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.021279 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" event={"ID":"bf32204d-973f-4397-8fbe-8b155f1f6f52","Type":"ContainerDied","Data":"e8432814af98cdd38786133fdb7e2fcd90313e16de2dcdc3be05676c6460116e"} Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.021299 4903 scope.go:117] "RemoveContainer" containerID="d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.037255 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wwf2t" event={"ID":"0ee28286-9cd6-4014-b388-a41d22c5e413","Type":"ContainerDied","Data":"72b2bf49789b69fa882ddd87f89c37c5436c9eea9ee535f86db16d810b943d9d"} Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.037277 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wwf2t" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.043246 4903 generic.go:334] "Generic (PLEG): container finished" podID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerID="02a42f37dbf91bc71d23efe4fb6af018b9e853e3b220c2f03760e372b14d5184" exitCode=0 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.043309 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-868d5455d4-797gw" event={"ID":"d91d56c5-1ada-417a-8a87-dc4e3960a186","Type":"ContainerDied","Data":"02a42f37dbf91bc71d23efe4fb6af018b9e853e3b220c2f03760e372b14d5184"} Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.051517 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1423eabe-b2af-4a42-a38e-d5c1c53e7845","Type":"ContainerDied","Data":"284fa6bde6207697796351bd6359745370a4c4c885896c026ef246c2e04bb7b7"} Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.051655 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.053504 4903 generic.go:334] "Generic (PLEG): container finished" podID="2d08ed75-05f7-4c45-bc6e-0562a7bbb936" containerID="d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e" exitCode=0 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.053597 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d08ed75-05f7-4c45-bc6e-0562a7bbb936","Type":"ContainerDied","Data":"d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e"} Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.066030 4903 scope.go:117] "RemoveContainer" containerID="7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.078646 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-867d8c4cc5-vz4lw"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.087137 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"033b894a-46ce-4bd8-b97c-312c8b7c90dd","Type":"ContainerDied","Data":"54c34f0381bdb2bdbed9efb44ef91575724d291b31882420c8bd36b933ea7a12"} Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.087634 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.118920 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frjxk\" (UniqueName: \"kubernetes.io/projected/cad02107-7c85-434e-aeb2-ab7a9924743d-kube-api-access-frjxk\") pod \"root-account-create-update-588cq\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " pod="openstack/root-account-create-update-588cq" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.119191 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad02107-7c85-434e-aeb2-ab7a9924743d-operator-scripts\") pod \"root-account-create-update-588cq\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " pod="openstack/root-account-create-update-588cq" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.120297 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad02107-7c85-434e-aeb2-ab7a9924743d-operator-scripts\") pod \"root-account-create-update-588cq\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " pod="openstack/root-account-create-update-588cq" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.128585 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-867d8c4cc5-vz4lw"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.135063 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frjxk\" (UniqueName: \"kubernetes.io/projected/cad02107-7c85-434e-aeb2-ab7a9924743d-kube-api-access-frjxk\") pod \"root-account-create-update-588cq\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " pod="openstack/root-account-create-update-588cq" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.179139 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wwf2t"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.194466 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wwf2t"] Jan 
28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.222177 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.226087 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-central-agent" containerID="cri-o://4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1" gracePeriod=30 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.228512 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="proxy-httpd" containerID="cri-o://245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77" gracePeriod=30 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.228696 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-notification-agent" containerID="cri-o://843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e" gracePeriod=30 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.228736 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="sg-core" containerID="cri-o://f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978" gracePeriod=30 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.231269 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-588cq" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.286680 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.287091 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" containerName="kube-state-metrics" containerID="cri-o://02154f1f0b54e0cfa5dae5ad4eb9c57e22b0da30380c0810c189562bfe3ae25b" gracePeriod=30 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.323823 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.354839 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.370905 4903 scope.go:117] "RemoveContainer" containerID="d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.373048 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.380868 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4\": container with ID starting with d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4 not found: ID does not exist" containerID="d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.380925 4903 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4"} err="failed to get container status \"d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4\": rpc error: code = NotFound desc = could not find container \"d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4\": container with ID starting with d9ab6dd17c6d7bfff9ddd687eb405ed981ed8d6d62842bad45e8a7cfd740aab4 not found: ID does not exist" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.380963 4903 scope.go:117] "RemoveContainer" containerID="7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2" Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.381078 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.394874 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2\": container with ID starting with 7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2 not found: ID does not exist" containerID="7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.394928 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2"} err="failed to get container status \"7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2\": rpc error: code = NotFound desc = could not find container \"7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2\": container with ID starting with 7bc8418fbdef990a8b4fc6314948a5d50e9ae6d2aef9d281b26a3ffcb5774cb2 not found: ID does not exist" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.394960 4903 scope.go:117] "RemoveContainer" containerID="794515d4b47b412812a3f26bee010ffe855a15147bcf38cac1153e75b984d927" Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.395094 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.396072 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.396134 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.402966 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.416749 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.443067 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.449866 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fd05-account-create-update-vs6jd"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.476342 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.477020 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="bac3a1bb-718a-42b1-9c87-71258a05b083" containerName="memcached" containerID="cri-o://22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7" gracePeriod=30 Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.484266 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.484323 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.505973 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-fd05-account-create-update-vs6jd"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.524719 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fd05-account-create-update-8x6l5"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.526404 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.532340 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.555624 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fd05-account-create-update-8x6l5"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.567657 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5pnjn"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.571477 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4nj4\" (UniqueName: \"kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.571544 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.583373 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mrfh5"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.592130 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.602508 4903 scope.go:117] "RemoveContainer" containerID="da1778879f20d0c4622f1b8c62b20be5cfe0c84babdca30cc0f05f8464fed3f0" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.606239 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5pnjn"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.609493 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g8tcr" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerName="ovn-controller" probeResult="failure" output=< Jan 28 16:09:51 crc kubenswrapper[4903]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 28 16:09:51 crc kubenswrapper[4903]: > Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.615512 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mrfh5"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.631521 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-55866f486f-t9ft2"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.632089 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-55866f486f-t9ft2" podUID="1f6d6643-926c-4d0d-8986-a7c56e748e3f" containerName="keystone-api" containerID="cri-o://468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4" gracePeriod=30 Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.649602 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.672509 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-internal-tls-certs\") pod \"d91d56c5-1ada-417a-8a87-dc4e3960a186\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.672685 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d91d56c5-1ada-417a-8a87-dc4e3960a186-logs\") pod \"d91d56c5-1ada-417a-8a87-dc4e3960a186\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.672725 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-public-tls-certs\") pod \"d91d56c5-1ada-417a-8a87-dc4e3960a186\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.672757 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-scripts\") pod \"d91d56c5-1ada-417a-8a87-dc4e3960a186\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.672782 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcrk5\" (UniqueName: \"kubernetes.io/projected/d91d56c5-1ada-417a-8a87-dc4e3960a186-kube-api-access-dcrk5\") pod \"d91d56c5-1ada-417a-8a87-dc4e3960a186\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.672853 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-config-data\") pod \"d91d56c5-1ada-417a-8a87-dc4e3960a186\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.672934 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-combined-ca-bundle\") pod \"d91d56c5-1ada-417a-8a87-dc4e3960a186\" (UID: \"d91d56c5-1ada-417a-8a87-dc4e3960a186\") " Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.673382 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nj4\" (UniqueName: \"kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.673422 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.673597 4903 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.673669 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts 
podName:c89d22ea-a410-48a7-9af5-08dce403a809 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:52.173647603 +0000 UTC m=+1464.449619114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts") pod "keystone-fd05-account-create-update-8x6l5" (UID: "c89d22ea-a410-48a7-9af5-08dce403a809") : configmap "openstack-scripts" not found Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.681076 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fd05-account-create-update-8x6l5"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.682519 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d91d56c5-1ada-417a-8a87-dc4e3960a186-logs" (OuterVolumeSpecName: "logs") pod "d91d56c5-1ada-417a-8a87-dc4e3960a186" (UID: "d91d56c5-1ada-417a-8a87-dc4e3960a186"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.684880 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-h4nj4 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-fd05-account-create-update-8x6l5" podUID="c89d22ea-a410-48a7-9af5-08dce403a809" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.685122 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-scripts" (OuterVolumeSpecName: "scripts") pod "d91d56c5-1ada-417a-8a87-dc4e3960a186" (UID: "d91d56c5-1ada-417a-8a87-dc4e3960a186"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.685943 4903 projected.go:194] Error preparing data for projected volume kube-api-access-h4nj4 for pod openstack/keystone-fd05-account-create-update-8x6l5: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 16:09:51 crc kubenswrapper[4903]: E0128 16:09:51.686003 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4 podName:c89d22ea-a410-48a7-9af5-08dce403a809 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:52.18598366 +0000 UTC m=+1464.461955171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h4nj4" (UniqueName: "kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4") pod "keystone-fd05-account-create-update-8x6l5" (UID: "c89d22ea-a410-48a7-9af5-08dce403a809") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.687925 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5lmfj"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.696639 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5lmfj"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.703892 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d91d56c5-1ada-417a-8a87-dc4e3960a186-kube-api-access-dcrk5" (OuterVolumeSpecName: "kube-api-access-dcrk5") pod "d91d56c5-1ada-417a-8a87-dc4e3960a186" (UID: "d91d56c5-1ada-417a-8a87-dc4e3960a186"). 
InnerVolumeSpecName "kube-api-access-dcrk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.704175 4903 scope.go:117] "RemoveContainer" containerID="2cc0c1e09b1d32a98d2dde5eee40318869853a44f68e5250ff8ceb601a48d512" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.705088 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-588cq"] Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.775019 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d91d56c5-1ada-417a-8a87-dc4e3960a186-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.775375 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.775386 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcrk5\" (UniqueName: \"kubernetes.io/projected/d91d56c5-1ada-417a-8a87-dc4e3960a186-kube-api-access-dcrk5\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.826823 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-config-data" (OuterVolumeSpecName: "config-data") pod "d91d56c5-1ada-417a-8a87-dc4e3960a186" (UID: "d91d56c5-1ada-417a-8a87-dc4e3960a186"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.877723 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.918993 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79d7544958-xm4mt" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:49362->10.217.0.157:9311: read: connection reset by peer" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.918992 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79d7544958-xm4mt" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:49374->10.217.0.157:9311: read: connection reset by peer" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.927890 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d91d56c5-1ada-417a-8a87-dc4e3960a186" (UID: "d91d56c5-1ada-417a-8a87-dc4e3960a186"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.933014 4903 scope.go:117] "RemoveContainer" containerID="e3cac4a8f1fa34db395b4644330439522c368c8649ab045e0d9d216976c0e7ee" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.943741 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d91d56c5-1ada-417a-8a87-dc4e3960a186" (UID: "d91d56c5-1ada-417a-8a87-dc4e3960a186"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.953743 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d91d56c5-1ada-417a-8a87-dc4e3960a186" (UID: "d91d56c5-1ada-417a-8a87-dc4e3960a186"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.980628 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.980669 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:51 crc kubenswrapper[4903]: I0128 16:09:51.980684 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d91d56c5-1ada-417a-8a87-dc4e3960a186-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.008922 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerName="galera" containerID="cri-o://0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" gracePeriod=30 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.117190 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": dial tcp 10.217.0.201:8775: connect: connection refused" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.117454 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": dial tcp 10.217.0.201:8775: connect: connection refused" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.206905 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nj4\" (UniqueName: \"kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.206956 4903 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.207149 4903 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.207201 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts podName:c89d22ea-a410-48a7-9af5-08dce403a809 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:53.207186927 +0000 UTC m=+1465.483158438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts") pod "keystone-fd05-account-create-update-8x6l5" (UID: "c89d22ea-a410-48a7-9af5-08dce403a809") : configmap "openstack-scripts" not found Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.212265 4903 projected.go:194] Error preparing data for projected volume kube-api-access-h4nj4 for pod openstack/keystone-fd05-account-create-update-8x6l5: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.212850 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4 podName:c89d22ea-a410-48a7-9af5-08dce403a809 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:53.212615766 +0000 UTC m=+1465.488587287 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-h4nj4" (UniqueName: "kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4") pod "keystone-fd05-account-create-update-8x6l5" (UID: "c89d22ea-a410-48a7-9af5-08dce403a809") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.244869 4903 generic.go:334] "Generic (PLEG): container finished" podID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerID="5294340766b49118b122c18adf127768d2b7a2248eea8752adcf1bf834f406c1" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.245850 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd","Type":"ContainerDied","Data":"5294340766b49118b122c18adf127768d2b7a2248eea8752adcf1bf834f406c1"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.272005 4903 generic.go:334] "Generic (PLEG): container finished" podID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerID="245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.272038 4903 generic.go:334] "Generic (PLEG): container finished" podID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerID="f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978" exitCode=2 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.272045 4903 generic.go:334] "Generic (PLEG): container finished" podID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerID="4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.272093 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerDied","Data":"245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.272125 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerDied","Data":"f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.272137 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerDied","Data":"4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.284443 4903 generic.go:334] "Generic (PLEG): container finished" podID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerID="84cee160ceac6a4ece1e643340f1aeca0d04bc37f045a38f6f21bb0a47361679" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.284744 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9","Type":"ContainerDied","Data":"84cee160ceac6a4ece1e643340f1aeca0d04bc37f045a38f6f21bb0a47361679"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.309895 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.312935 4903 generic.go:334] "Generic (PLEG): container finished" 
podID="d3c39267-5b08-4783-b267-7ee6395020f2" containerID="632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.312983 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3c39267-5b08-4783-b267-7ee6395020f2","Type":"ContainerDied","Data":"632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.344181 4903 generic.go:334] "Generic (PLEG): container finished" podID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerID="b0fb34b235f11adc68d9beed30603f223ccc79ee9902295559769c17c5aa973b" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.344280 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c3ca866-aac2-4b4f-ac25-71e741d9db2f","Type":"ContainerDied","Data":"b0fb34b235f11adc68d9beed30603f223ccc79ee9902295559769c17c5aa973b"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.345001 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.350892 4903 generic.go:334] "Generic (PLEG): container finished" podID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerID="5c7ed7cd33e049e46f8040cb018864248e1ee41e536bd85ada33bb819a70ed86" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.350973 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f4f5f43-7fbc-41d1-935d-b0844db162a7","Type":"ContainerDied","Data":"5c7ed7cd33e049e46f8040cb018864248e1ee41e536bd85ada33bb819a70ed86"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.380441 4903 generic.go:334] "Generic (PLEG): container finished" podID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerID="4ab5c17cdbc07a22bc6e3f55c4de9ca0284d8300cd938b4df77da1ec21f7ea19" exitCode=0 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.380572 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d7544958-xm4mt" event={"ID":"438d1db6-7b20-4f31-8a43-aa8f0c972501","Type":"ContainerDied","Data":"4ab5c17cdbc07a22bc6e3f55c4de9ca0284d8300cd938b4df77da1ec21f7ea19"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.388990 4903 generic.go:334] "Generic (PLEG): container finished" podID="66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" containerID="02154f1f0b54e0cfa5dae5ad4eb9c57e22b0da30380c0810c189562bfe3ae25b" exitCode=2 Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.389083 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba","Type":"ContainerDied","Data":"02154f1f0b54e0cfa5dae5ad4eb9c57e22b0da30380c0810c189562bfe3ae25b"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.414760 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-combined-ca-bundle\") pod \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.414852 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvtvt\" (UniqueName: \"kubernetes.io/projected/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-kube-api-access-zvtvt\") pod \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\" 
(UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.414901 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-config-data\") pod \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\" (UID: \"2d08ed75-05f7-4c45-bc6e-0562a7bbb936\") " Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.422721 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.423095 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374 is running failed: container process not found" containerID="45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.423218 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-868d5455d4-797gw" Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.424710 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374 is running failed: container process not found" containerID="45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.428177 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374 is running failed: container process not found" containerID="45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 16:09:52 crc kubenswrapper[4903]: E0128 16:09:52.428275 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9ef215ce-85eb-4148-848a-aeb5a15e343e" containerName="nova-scheduler-scheduler" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.435949 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-kube-api-access-zvtvt" (OuterVolumeSpecName: "kube-api-access-zvtvt") pod "2d08ed75-05f7-4c45-bc6e-0562a7bbb936" (UID: "2d08ed75-05f7-4c45-bc6e-0562a7bbb936"). InnerVolumeSpecName "kube-api-access-zvtvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.437700 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="033b894a-46ce-4bd8-b97c-312c8b7c90dd" path="/var/lib/kubelet/pods/033b894a-46ce-4bd8-b97c-312c8b7c90dd/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.438795 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee28286-9cd6-4014-b388-a41d22c5e413" path="/var/lib/kubelet/pods/0ee28286-9cd6-4014-b388-a41d22c5e413/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.443844 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1423eabe-b2af-4a42-a38e-d5c1c53e7845" path="/var/lib/kubelet/pods/1423eabe-b2af-4a42-a38e-d5c1c53e7845/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.445222 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c586e0-c175-4b87-9464-b44649a8eb10" path="/var/lib/kubelet/pods/57c586e0-c175-4b87-9464-b44649a8eb10/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.446431 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f309ffd-6cba-4804-b3d5-114c4cad07bc" path="/var/lib/kubelet/pods/7f309ffd-6cba-4804-b3d5-114c4cad07bc/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.447885 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f" path="/var/lib/kubelet/pods/9400ccd1-efb2-4c92-b0e5-d5c221dfcb6f/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.448572 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab78c773-5297-4a98-8c9a-c80dbc6baf09" path="/var/lib/kubelet/pods/ab78c773-5297-4a98-8c9a-c80dbc6baf09/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.450669 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" path="/var/lib/kubelet/pods/bf32204d-973f-4397-8fbe-8b155f1f6f52/volumes" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.455883 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d08ed75-05f7-4c45-bc6e-0562a7bbb936" (UID: "2d08ed75-05f7-4c45-bc6e-0562a7bbb936"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.466952 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-config-data" (OuterVolumeSpecName: "config-data") pod "2d08ed75-05f7-4c45-bc6e-0562a7bbb936" (UID: "2d08ed75-05f7-4c45-bc6e-0562a7bbb936"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.518434 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.518805 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvtvt\" (UniqueName: \"kubernetes.io/projected/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-kube-api-access-zvtvt\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.518816 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d08ed75-05f7-4c45-bc6e-0562a7bbb936-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.557664 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-868d5455d4-797gw" event={"ID":"d91d56c5-1ada-417a-8a87-dc4e3960a186","Type":"ContainerDied","Data":"515dc11617073c0c30c93ab9c6e7836446b746f7e723fa0f2ccd8ff82d8c8a57"} Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.557721 4903 scope.go:117] "RemoveContainer" containerID="02a42f37dbf91bc71d23efe4fb6af018b9e853e3b220c2f03760e372b14d5184" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.600591 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-868d5455d4-797gw"] Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.606395 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-868d5455d4-797gw"] Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.613349 4903 scope.go:117] "RemoveContainer" containerID="5dd7a851cd619c29827b0ea6cd215ddd77b2818c97ba5045d1ae347a56fe5ca2" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.616963 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.617380 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:09:52 crc kubenswrapper[4903]: I0128 16:09:52.630066 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.698415 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.702558 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.721146 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.727472 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-logs\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.727614 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-internal-tls-certs\") pod \"438d1db6-7b20-4f31-8a43-aa8f0c972501\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.727638 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2q9\" (UniqueName: \"kubernetes.io/projected/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-api-access-rj2q9\") pod \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728086 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-config-data\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728160 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-config-data\") pod \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728186 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-certs\") pod \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728224 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data-custom\") pod \"438d1db6-7b20-4f31-8a43-aa8f0c972501\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728244 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-logs" (OuterVolumeSpecName: "logs") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728261 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdwbz\" (UniqueName: \"kubernetes.io/projected/438d1db6-7b20-4f31-8a43-aa8f0c972501-kube-api-access-pdwbz\") pod \"438d1db6-7b20-4f31-8a43-aa8f0c972501\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728328 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6b6s\" (UniqueName: \"kubernetes.io/projected/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-kube-api-access-l6b6s\") pod \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728355 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-combined-ca-bundle\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728380 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-internal-tls-certs\") pod \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728410 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-combined-ca-bundle\") pod \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728455 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-internal-tls-certs\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728478 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-scripts\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.728501 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data\") pod \"438d1db6-7b20-4f31-8a43-aa8f0c972501\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.729772 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-logs\") pod \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.729819 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438d1db6-7b20-4f31-8a43-aa8f0c972501-logs\") pod \"438d1db6-7b20-4f31-8a43-aa8f0c972501\" (UID: 
\"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.729849 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.729879 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-httpd-run\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.729921 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-combined-ca-bundle\") pod \"438d1db6-7b20-4f31-8a43-aa8f0c972501\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.729949 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-config\") pod \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\" (UID: \"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.730011 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-public-tls-certs\") pod \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.730042 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-public-tls-certs\") pod \"438d1db6-7b20-4f31-8a43-aa8f0c972501\" (UID: \"438d1db6-7b20-4f31-8a43-aa8f0c972501\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.730073 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpc8k\" (UniqueName: \"kubernetes.io/projected/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-kube-api-access-zpc8k\") pod \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\" (UID: \"5c3ca866-aac2-4b4f-ac25-71e741d9db2f\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.730094 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-combined-ca-bundle\") pod \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\" (UID: \"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.730674 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.741710 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.758851 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-api-access-rj2q9" (OuterVolumeSpecName: "kube-api-access-rj2q9") pod "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" (UID: "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba"). InnerVolumeSpecName "kube-api-access-rj2q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.762702 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438d1db6-7b20-4f31-8a43-aa8f0c972501-kube-api-access-pdwbz" (OuterVolumeSpecName: "kube-api-access-pdwbz") pod "438d1db6-7b20-4f31-8a43-aa8f0c972501" (UID: "438d1db6-7b20-4f31-8a43-aa8f0c972501"). InnerVolumeSpecName "kube-api-access-pdwbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.762889 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.768892 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438d1db6-7b20-4f31-8a43-aa8f0c972501-logs" (OuterVolumeSpecName: "logs") pod "438d1db6-7b20-4f31-8a43-aa8f0c972501" (UID: "438d1db6-7b20-4f31-8a43-aa8f0c972501"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.770847 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-logs" (OuterVolumeSpecName: "logs") pod "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" (UID: "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.778699 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-scripts" (OuterVolumeSpecName: "scripts") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.778699 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "438d1db6-7b20-4f31-8a43-aa8f0c972501" (UID: "438d1db6-7b20-4f31-8a43-aa8f0c972501"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.811110 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-kube-api-access-l6b6s" (OuterVolumeSpecName: "kube-api-access-l6b6s") pod "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" (UID: "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9"). InnerVolumeSpecName "kube-api-access-l6b6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.813291 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.815216 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-kube-api-access-zpc8k" (OuterVolumeSpecName: "kube-api-access-zpc8k") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "kube-api-access-zpc8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.833842 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4f5f43-7fbc-41d1-935d-b0844db162a7-logs\") pod \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834017 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-combined-ca-bundle\") pod \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834355 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-config-data\") pod \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834393 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7524\" (UniqueName: \"kubernetes.io/projected/7f4f5f43-7fbc-41d1-935d-b0844db162a7-kube-api-access-b7524\") pod \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834421 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-nova-metadata-tls-certs\") pod \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\" (UID: \"7f4f5f43-7fbc-41d1-935d-b0844db162a7\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834817 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834835 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdwbz\" (UniqueName: \"kubernetes.io/projected/438d1db6-7b20-4f31-8a43-aa8f0c972501-kube-api-access-pdwbz\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834851 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6b6s\" (UniqueName: \"kubernetes.io/projected/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-kube-api-access-l6b6s\") on node \"crc\" DevicePath \"\"" Jan 28 
16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834862 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834871 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834882 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438d1db6-7b20-4f31-8a43-aa8f0c972501-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834904 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834915 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834927 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpc8k\" (UniqueName: \"kubernetes.io/projected/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-kube-api-access-zpc8k\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.834938 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2q9\" (UniqueName: \"kubernetes.io/projected/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-api-access-rj2q9\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.838231 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f4f5f43-7fbc-41d1-935d-b0844db162a7-logs" (OuterVolumeSpecName: "logs") pod "7f4f5f43-7fbc-41d1-935d-b0844db162a7" (UID: "7f4f5f43-7fbc-41d1-935d-b0844db162a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.881435 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.900964 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4f5f43-7fbc-41d1-935d-b0844db162a7-kube-api-access-b7524" (OuterVolumeSpecName: "kube-api-access-b7524") pod "7f4f5f43-7fbc-41d1-935d-b0844db162a7" (UID: "7f4f5f43-7fbc-41d1-935d-b0844db162a7"). InnerVolumeSpecName "kube-api-access-b7524". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.932772 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-config-data" (OuterVolumeSpecName: "config-data") pod "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" (UID: "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.940972 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f4f5f43-7fbc-41d1-935d-b0844db162a7-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.941008 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.941019 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:52.941031 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7524\" (UniqueName: \"kubernetes.io/projected/7f4f5f43-7fbc-41d1-935d-b0844db162a7-kube-api-access-b7524\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.011697 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" (UID: "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.037461 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.199:3000/\": dial tcp 10.217.0.199:3000: connect: connection refused" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.044683 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.156503 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.166070 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.196855 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" (UID: "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.240913 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" (UID: "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.282427 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nj4\" (UniqueName: \"kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.282470 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts\") pod \"keystone-fd05-account-create-update-8x6l5\" (UID: \"c89d22ea-a410-48a7-9af5-08dce403a809\") " pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.282669 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.282681 4903 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.282738 4903 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.282786 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts podName:c89d22ea-a410-48a7-9af5-08dce403a809 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:55.282771749 +0000 UTC m=+1467.558743250 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts") pod "keystone-fd05-account-create-update-8x6l5" (UID: "c89d22ea-a410-48a7-9af5-08dce403a809") : configmap "openstack-scripts" not found Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.298177 4903 projected.go:194] Error preparing data for projected volume kube-api-access-h4nj4 for pod openstack/keystone-fd05-account-create-update-8x6l5: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.298253 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4 podName:c89d22ea-a410-48a7-9af5-08dce403a809 nodeName:}" failed. No retries permitted until 2026-01-28 16:09:55.298231141 +0000 UTC m=+1467.574202662 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-h4nj4" (UniqueName: "kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4") pod "keystone-fd05-account-create-update-8x6l5" (UID: "c89d22ea-a410-48a7-9af5-08dce403a809") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.335281 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "438d1db6-7b20-4f31-8a43-aa8f0c972501" (UID: "438d1db6-7b20-4f31-8a43-aa8f0c972501"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.338280 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "438d1db6-7b20-4f31-8a43-aa8f0c972501" (UID: "438d1db6-7b20-4f31-8a43-aa8f0c972501"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.344099 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f4f5f43-7fbc-41d1-935d-b0844db162a7" (UID: "7f4f5f43-7fbc-41d1-935d-b0844db162a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.364703 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" (UID: "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.372648 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.383829 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.383851 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.383860 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.383868 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.383913 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.388686 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data" (OuterVolumeSpecName: "config-data") pod "438d1db6-7b20-4f31-8a43-aa8f0c972501" (UID: "438d1db6-7b20-4f31-8a43-aa8f0c972501"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.393298 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-config-data" (OuterVolumeSpecName: "config-data") pod "5c3ca866-aac2-4b4f-ac25-71e741d9db2f" (UID: "5c3ca866-aac2-4b4f-ac25-71e741d9db2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.412011 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "438d1db6-7b20-4f31-8a43-aa8f0c972501" (UID: "438d1db6-7b20-4f31-8a43-aa8f0c972501"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.417869 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" (UID: "59f1f4e5-22a4-420b-b6f2-8f936c5c39c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.434039 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.437453 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2d08ed75-05f7-4c45-bc6e-0562a7bbb936","Type":"ContainerDied","Data":"9096665546ada278b46fb5196597e34bca4dd34ea029157595af40b9d81b6f0a"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.437503 4903 scope.go:117] "RemoveContainer" containerID="d17dde91cb0a1a6f98fd83f4fc95a8b7937ef48ffffe0fb1238784914e759a6e" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.440212 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c3ca866-aac2-4b4f-ac25-71e741d9db2f","Type":"ContainerDied","Data":"26e3bda1ae259924517b42ce507802f2aee0acb2100f04be9a88c6da9afbc546"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.440233 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.453576 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79d7544958-xm4mt" event={"ID":"438d1db6-7b20-4f31-8a43-aa8f0c972501","Type":"ContainerDied","Data":"5468460068ba7936c0546ff6b356daa0181d7982dd39ef19f47094b5b655b9e4"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.453678 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79d7544958-xm4mt" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.464765 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" (UID: "66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.466152 4903 generic.go:334] "Generic (PLEG): container finished" podID="9ef215ce-85eb-4148-848a-aeb5a15e343e" containerID="45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374" exitCode=0 Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.466245 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ef215ce-85eb-4148-848a-aeb5a15e343e","Type":"ContainerDied","Data":"45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.476874 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"59f1f4e5-22a4-420b-b6f2-8f936c5c39c9","Type":"ContainerDied","Data":"58bf1c2569224c6c45eb3bce804aee0facc1bc81dc3da87edb6c105a6885bda9"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.477015 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.485382 4903 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.485402 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.485411 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.485420 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/438d1db6-7b20-4f31-8a43-aa8f0c972501-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.485430 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c3ca866-aac2-4b4f-ac25-71e741d9db2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.487561 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7f4f5f43-7fbc-41d1-935d-b0844db162a7" (UID: "7f4f5f43-7fbc-41d1-935d-b0844db162a7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.488051 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3c39267-5b08-4783-b267-7ee6395020f2","Type":"ContainerDied","Data":"fc097a85e4b1cf5329f5d0e557314ca12ba9c6a72baee322bebac95f9e836bda"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.488075 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc097a85e4b1cf5329f5d0e557314ca12ba9c6a72baee322bebac95f9e836bda" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.502477 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7f4f5f43-7fbc-41d1-935d-b0844db162a7","Type":"ContainerDied","Data":"b5893882ab8ee781886f5597553395a5570d33496ac7f8e5b32fa3f9a98f7db9"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.502622 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.518840 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba","Type":"ContainerDied","Data":"4a33692bb65b9d9eb87a3fae88a43f812fd25ff67ac886ded3b66b0a56dc0076"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.518937 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.519668 4903 scope.go:117] "RemoveContainer" containerID="b0fb34b235f11adc68d9beed30603f223ccc79ee9902295559769c17c5aa973b" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.520467 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.521754 4903 generic.go:334] "Generic (PLEG): container finished" podID="bac3a1bb-718a-42b1-9c87-71258a05b083" containerID="22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7" exitCode=0 Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.521815 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fd05-account-create-update-8x6l5" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.525699 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bac3a1bb-718a-42b1-9c87-71258a05b083","Type":"ContainerDied","Data":"22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.525738 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bac3a1bb-718a-42b1-9c87-71258a05b083","Type":"ContainerDied","Data":"d8b02200375a1f021216a2ca1dbb1b01ec854046bbe9d9b112d4f98d4c7a9d0b"} Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.526194 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-config-data" (OuterVolumeSpecName: "config-data") pod "7f4f5f43-7fbc-41d1-935d-b0844db162a7" (UID: "7f4f5f43-7fbc-41d1-935d-b0844db162a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.527333 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.549549 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.564972 4903 scope.go:117] "RemoveContainer" containerID="c34ec1bdca9dcf388b45d4df31616bfc2ee16b7a70a6f94f04662492238c5d30" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.582588 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.586471 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtvrg\" (UniqueName: \"kubernetes.io/projected/bac3a1bb-718a-42b1-9c87-71258a05b083-kube-api-access-jtvrg\") pod \"bac3a1bb-718a-42b1-9c87-71258a05b083\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.586566 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-config-data\") pod \"bac3a1bb-718a-42b1-9c87-71258a05b083\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.586644 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-kolla-config\") pod \"bac3a1bb-718a-42b1-9c87-71258a05b083\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.586674 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-combined-ca-bundle\") pod \"bac3a1bb-718a-42b1-9c87-71258a05b083\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.586731 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-memcached-tls-certs\") pod \"bac3a1bb-718a-42b1-9c87-71258a05b083\" (UID: \"bac3a1bb-718a-42b1-9c87-71258a05b083\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.586785 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-combined-ca-bundle\") pod \"d3c39267-5b08-4783-b267-7ee6395020f2\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.588808 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7fjp\" (UniqueName: \"kubernetes.io/projected/d3c39267-5b08-4783-b267-7ee6395020f2-kube-api-access-k7fjp\") pod \"d3c39267-5b08-4783-b267-7ee6395020f2\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.588839 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-config-data\") pod \"d3c39267-5b08-4783-b267-7ee6395020f2\" (UID: \"d3c39267-5b08-4783-b267-7ee6395020f2\") " Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.591358 4903 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-config-data" (OuterVolumeSpecName: "config-data") pod "bac3a1bb-718a-42b1-9c87-71258a05b083" (UID: "bac3a1bb-718a-42b1-9c87-71258a05b083"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.593185 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79d7544958-xm4mt"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.593572 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "bac3a1bb-718a-42b1-9c87-71258a05b083" (UID: "bac3a1bb-718a-42b1-9c87-71258a05b083"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.594900 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c39267-5b08-4783-b267-7ee6395020f2-kube-api-access-k7fjp" (OuterVolumeSpecName: "kube-api-access-k7fjp") pod "d3c39267-5b08-4783-b267-7ee6395020f2" (UID: "d3c39267-5b08-4783-b267-7ee6395020f2"). InnerVolumeSpecName "kube-api-access-k7fjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.595306 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.595352 4903 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f4f5f43-7fbc-41d1-935d-b0844db162a7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.595366 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac3a1bb-718a-42b1-9c87-71258a05b083-kube-api-access-jtvrg" (OuterVolumeSpecName: "kube-api-access-jtvrg") pod "bac3a1bb-718a-42b1-9c87-71258a05b083" (UID: "bac3a1bb-718a-42b1-9c87-71258a05b083"). InnerVolumeSpecName "kube-api-access-jtvrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.597148 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.597435 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data podName:cee6442c-f9ef-4902-b6ec-2bc01a904849 nodeName:}" failed. No retries permitted until 2026-01-28 16:10:01.597410652 +0000 UTC m=+1473.873382193 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data") pod "rabbitmq-cell1-server-0" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849") : configmap "rabbitmq-cell1-config-data" not found Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.602094 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-79d7544958-xm4mt"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.610075 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.635815 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.638756 4903 scope.go:117] "RemoveContainer" containerID="4ab5c17cdbc07a22bc6e3f55c4de9ca0284d8300cd938b4df77da1ec21f7ea19" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.643511 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3c39267-5b08-4783-b267-7ee6395020f2" (UID: "d3c39267-5b08-4783-b267-7ee6395020f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.649041 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "bac3a1bb-718a-42b1-9c87-71258a05b083" (UID: "bac3a1bb-718a-42b1-9c87-71258a05b083"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.655888 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bac3a1bb-718a-42b1-9c87-71258a05b083" (UID: "bac3a1bb-718a-42b1-9c87-71258a05b083"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.696780 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7fjp\" (UniqueName: \"kubernetes.io/projected/d3c39267-5b08-4783-b267-7ee6395020f2-kube-api-access-k7fjp\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.696811 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtvrg\" (UniqueName: \"kubernetes.io/projected/bac3a1bb-718a-42b1-9c87-71258a05b083-kube-api-access-jtvrg\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.696823 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.696836 4903 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bac3a1bb-718a-42b1-9c87-71258a05b083-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.696849 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.696861 4903 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bac3a1bb-718a-42b1-9c87-71258a05b083-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.696872 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.702779 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-config-data" (OuterVolumeSpecName: "config-data") pod "d3c39267-5b08-4783-b267-7ee6395020f2" (UID: "d3c39267-5b08-4783-b267-7ee6395020f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.704933 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.708256 4903 scope.go:117] "RemoveContainer" containerID="f5c9a79fdf1fdd76ebd49ee1d6512d0b2f33149f5da0dd564a2edc3e7102a0f1" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.723921 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.750283 4903 scope.go:117] "RemoveContainer" containerID="84cee160ceac6a4ece1e643340f1aeca0d04bc37f045a38f6f21bb0a47361679" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.756146 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fd05-account-create-update-8x6l5"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.769910 4903 scope.go:117] "RemoveContainer" containerID="415bb4f9abcba2194b819d557a32350c24234e674b16185bd86d9dd42b6d9a0b" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.776810 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-fd05-account-create-update-8x6l5"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.792948 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.799441 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c39267-5b08-4783-b267-7ee6395020f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.799479 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c89d22ea-a410-48a7-9af5-08dce403a809-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.799494 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4nj4\" (UniqueName: \"kubernetes.io/projected/c89d22ea-a410-48a7-9af5-08dce403a809-kube-api-access-h4nj4\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.801578 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.811038 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.821981 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 16:09:53 crc kubenswrapper[4903]: E0128 16:09:53.825694 4903 prober.go:104] "Probe errored" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerName="galera" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.828813 4903 scope.go:117] "RemoveContainer" containerID="5c7ed7cd33e049e46f8040cb018864248e1ee41e536bd85ada33bb819a70ed86" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.848463 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.866688 4903 scope.go:117] "RemoveContainer" containerID="80e37ef3a7839cc1c8d8d21208fac7637eb50268a7239fb7994a6925aeaeb7ef" Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.901305 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.919311 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 16:09:53 crc kubenswrapper[4903]: I0128 16:09:53.966764 4903 scope.go:117] "RemoveContainer" containerID="02154f1f0b54e0cfa5dae5ad4eb9c57e22b0da30380c0810c189562bfe3ae25b" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.044426 4903 scope.go:117] "RemoveContainer" containerID="22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.083958 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.098774 4903 scope.go:117] "RemoveContainer" containerID="22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7" Jan 28 16:09:54 crc kubenswrapper[4903]: E0128 16:09:54.099159 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7\": container with ID starting with 22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7 not found: ID does not exist" containerID="22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.099191 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7"} err="failed to get container status \"22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7\": rpc error: code = NotFound desc = could not find container \"22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7\": container with ID starting with 22097379b6bef1be9fa0c01a791bb95001ee24a097e4b4b2a67015b2cd8b4bc7 not found: ID does not exist" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.112262 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-combined-ca-bundle\") pod \"9ef215ce-85eb-4148-848a-aeb5a15e343e\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.112308 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvd7r\" (UniqueName: \"kubernetes.io/projected/9ef215ce-85eb-4148-848a-aeb5a15e343e-kube-api-access-kvd7r\") pod \"9ef215ce-85eb-4148-848a-aeb5a15e343e\" (UID: 
\"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.112365 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-config-data\") pod \"9ef215ce-85eb-4148-848a-aeb5a15e343e\" (UID: \"9ef215ce-85eb-4148-848a-aeb5a15e343e\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.122576 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef215ce-85eb-4148-848a-aeb5a15e343e-kube-api-access-kvd7r" (OuterVolumeSpecName: "kube-api-access-kvd7r") pod "9ef215ce-85eb-4148-848a-aeb5a15e343e" (UID: "9ef215ce-85eb-4148-848a-aeb5a15e343e"). InnerVolumeSpecName "kube-api-access-kvd7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.165845 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ef215ce-85eb-4148-848a-aeb5a15e343e" (UID: "9ef215ce-85eb-4148-848a-aeb5a15e343e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.191976 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-config-data" (OuterVolumeSpecName: "config-data") pod "9ef215ce-85eb-4148-848a-aeb5a15e343e" (UID: "9ef215ce-85eb-4148-848a-aeb5a15e343e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.217377 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.217619 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvd7r\" (UniqueName: \"kubernetes.io/projected/9ef215ce-85eb-4148-848a-aeb5a15e343e-kube-api-access-kvd7r\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.217716 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef215ce-85eb-4148-848a-aeb5a15e343e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: E0128 16:09:54.217425 4903 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 16:09:54 crc kubenswrapper[4903]: E0128 16:09:54.217905 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data podName:bb51034c-4387-4aba-8eff-6ff960538da9 nodeName:}" failed. No retries permitted until 2026-01-28 16:10:02.217876487 +0000 UTC m=+1474.493847988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data") pod "rabbitmq-server-0" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9") : configmap "rabbitmq-config-data" not found Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.244265 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-588cq"] Jan 28 16:09:54 crc kubenswrapper[4903]: E0128 16:09:54.267787 4903 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 16:09:54 crc kubenswrapper[4903]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 28 16:09:54 crc kubenswrapper[4903]: Jan 28 16:09:54 crc kubenswrapper[4903]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 16:09:54 crc kubenswrapper[4903]: Jan 28 16:09:54 crc kubenswrapper[4903]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 16:09:54 crc kubenswrapper[4903]: Jan 28 16:09:54 crc kubenswrapper[4903]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 16:09:54 crc kubenswrapper[4903]: Jan 28 16:09:54 crc kubenswrapper[4903]: if [ -n "" ]; then Jan 28 16:09:54 crc kubenswrapper[4903]: GRANT_DATABASE="" Jan 28 16:09:54 crc kubenswrapper[4903]: else Jan 28 16:09:54 crc kubenswrapper[4903]: GRANT_DATABASE="*" Jan 28 16:09:54 crc kubenswrapper[4903]: fi Jan 28 16:09:54 crc kubenswrapper[4903]: Jan 28 16:09:54 crc kubenswrapper[4903]: # going for maximum compatibility here: Jan 28 16:09:54 crc kubenswrapper[4903]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 16:09:54 crc kubenswrapper[4903]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 16:09:54 crc kubenswrapper[4903]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 16:09:54 crc kubenswrapper[4903]: # support updates Jan 28 16:09:54 crc kubenswrapper[4903]: Jan 28 16:09:54 crc kubenswrapper[4903]: $MYSQL_CMD < logger="UnhandledError" Jan 28 16:09:54 crc kubenswrapper[4903]: E0128 16:09:54.269953 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-588cq" podUID="cad02107-7c85-434e-aeb2-ab7a9924743d" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.308570 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.422924 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.423384 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-public-tls-certs\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.423479 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-logs\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.423551 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-httpd-run\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.423627 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-combined-ca-bundle\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.423663 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5m4r\" (UniqueName: \"kubernetes.io/projected/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-kube-api-access-j5m4r\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.423692 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-config-data\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.423784 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-scripts\") pod \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\" (UID: \"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.424491 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.426475 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d08ed75-05f7-4c45-bc6e-0562a7bbb936" path="/var/lib/kubelet/pods/2d08ed75-05f7-4c45-bc6e-0562a7bbb936/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.426853 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-scripts" (OuterVolumeSpecName: "scripts") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.426967 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.427146 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-logs" (OuterVolumeSpecName: "logs") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.427421 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" path="/var/lib/kubelet/pods/438d1db6-7b20-4f31-8a43-aa8f0c972501/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.428053 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" path="/var/lib/kubelet/pods/59f1f4e5-22a4-420b-b6f2-8f936c5c39c9/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.430935 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" path="/var/lib/kubelet/pods/5c3ca866-aac2-4b4f-ac25-71e741d9db2f/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.432012 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-kube-api-access-j5m4r" (OuterVolumeSpecName: "kube-api-access-j5m4r") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "kube-api-access-j5m4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.432169 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" path="/var/lib/kubelet/pods/66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.432865 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" path="/var/lib/kubelet/pods/7f4f5f43-7fbc-41d1-935d-b0844db162a7/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.435797 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89d22ea-a410-48a7-9af5-08dce403a809" path="/var/lib/kubelet/pods/c89d22ea-a410-48a7-9af5-08dce403a809/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.436177 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" path="/var/lib/kubelet/pods/d91d56c5-1ada-417a-8a87-dc4e3960a186/volumes" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.490733 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.497630 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.505856 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-config-data" (OuterVolumeSpecName: "config-data") pod "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" (UID: "fb7483e7-0a5f-47dd-9f1a-baaed6822ffd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.506270 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531706 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5m4r\" (UniqueName: \"kubernetes.io/projected/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-kube-api-access-j5m4r\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531732 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531741 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531762 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531773 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531787 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531946 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.531995 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.542300 4903 generic.go:334] "Generic (PLEG): container finished" podID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerID="cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee" exitCode=0 Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.542358 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.542407 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cee6442c-f9ef-4902-b6ec-2bc01a904849","Type":"ContainerDied","Data":"cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee"} Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.542467 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cee6442c-f9ef-4902-b6ec-2bc01a904849","Type":"ContainerDied","Data":"9d9c5e642889ac0dd416fa9ad89a59a78b150882066355bc40b4a0a11b767a28"} Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.542490 4903 scope.go:117] "RemoveContainer" containerID="cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.548754 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.548909 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb7483e7-0a5f-47dd-9f1a-baaed6822ffd","Type":"ContainerDied","Data":"37f0decf149697fce841b3da8028a302c15a072726751463f73d11e364a82070"} Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.554206 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.555489 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-588cq" event={"ID":"cad02107-7c85-434e-aeb2-ab7a9924743d","Type":"ContainerStarted","Data":"a8d5a6b30536dbf386a97f676a6dbd78b3c379f8074ac80181e12372d12eee72"} Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.584513 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ef215ce-85eb-4148-848a-aeb5a15e343e","Type":"ContainerDied","Data":"bb56c9b6e1a6481e0abe288379fb3b1829e392b49dfa9d2d84959732310c1660"} Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.584634 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.591087 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.599401 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633356 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633421 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5xkr\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-kube-api-access-k5xkr\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633441 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-tls\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633552 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633636 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-plugins\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: 
I0128 16:09:54.633675 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-erlang-cookie\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633713 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-plugins-conf\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633806 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-server-conf\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633838 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-confd\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633865 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cee6442c-f9ef-4902-b6ec-2bc01a904849-pod-info\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.633895 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cee6442c-f9ef-4902-b6ec-2bc01a904849-erlang-cookie-secret\") pod \"cee6442c-f9ef-4902-b6ec-2bc01a904849\" (UID: \"cee6442c-f9ef-4902-b6ec-2bc01a904849\") " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.634292 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.634720 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.634739 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.635227 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.639596 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.641544 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.642518 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-kube-api-access-k5xkr" (OuterVolumeSpecName: "kube-api-access-k5xkr") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "kube-api-access-k5xkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.643745 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee6442c-f9ef-4902-b6ec-2bc01a904849-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.653750 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cee6442c-f9ef-4902-b6ec-2bc01a904849-pod-info" (OuterVolumeSpecName: "pod-info") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.655403 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data" (OuterVolumeSpecName: "config-data") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.679221 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-server-conf" (OuterVolumeSpecName: "server-conf") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.702720 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.707865 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.717253 4903 scope.go:117] "RemoveContainer" containerID="03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.717589 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.725300 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.732550 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.735899 4903 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cee6442c-f9ef-4902-b6ec-2bc01a904849-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.735959 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.735973 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5xkr\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-kube-api-access-k5xkr\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.735984 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.735995 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.736005 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.736016 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.736026 4903 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.736039 4903 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cee6442c-f9ef-4902-b6ec-2bc01a904849-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.736048 4903 reconciler_common.go:293] "Volume detached for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/cee6442c-f9ef-4902-b6ec-2bc01a904849-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.741862 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.748152 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.751483 4903 scope.go:117] "RemoveContainer" containerID="cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee" Jan 28 16:09:54 crc kubenswrapper[4903]: E0128 16:09:54.752713 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee\": container with ID starting with cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee not found: ID does not exist" containerID="cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.752754 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee"} err="failed to get container status \"cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee\": rpc error: code = NotFound desc = could not find container \"cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee\": container with ID starting with cd8d04a01025901b9075c3254bd01d3bedf41986c42dbe05f4bdc27473fb0dee not found: ID does not exist" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.752782 4903 scope.go:117] "RemoveContainer" containerID="03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10" Jan 28 16:09:54 crc kubenswrapper[4903]: E0128 16:09:54.753159 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10\": container with ID starting with 03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10 not found: ID does not exist" containerID="03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.753215 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10"} err="failed to get container status \"03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10\": rpc error: code = NotFound desc = could not find container \"03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10\": container with ID starting with 03d2a854078fc620afa46e135f5837075f3d687a66bbac56ac9497d950766f10 not found: ID does not exist" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.753249 4903 scope.go:117] "RemoveContainer" containerID="5294340766b49118b122c18adf127768d2b7a2248eea8752adcf1bf834f406c1" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.753603 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.755181 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.759988 
4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cee6442c-f9ef-4902-b6ec-2bc01a904849" (UID: "cee6442c-f9ef-4902-b6ec-2bc01a904849"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.780083 4903 scope.go:117] "RemoveContainer" containerID="09c605d6038ace2063cd36abb755adc5f02bf5408e796a180094c2237ab62208" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.807359 4903 scope.go:117] "RemoveContainer" containerID="45e71e3dba3217dbf197c1f894fb6a4d31fd20feafc4a2cda6f87172849b6374" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.837714 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cee6442c-f9ef-4902-b6ec-2bc01a904849-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.838168 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.939627 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 16:09:54 crc kubenswrapper[4903]: I0128 16:09:54.944394 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.071776 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-588cq" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.104926 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_62f6e7cc-c41e-47b0-8b46-6ec53e998cbe/ovn-northd/0.log" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.106457 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142608 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frjxk\" (UniqueName: \"kubernetes.io/projected/cad02107-7c85-434e-aeb2-ab7a9924743d-kube-api-access-frjxk\") pod \"cad02107-7c85-434e-aeb2-ab7a9924743d\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142696 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-config\") pod \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142760 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad02107-7c85-434e-aeb2-ab7a9924743d-operator-scripts\") pod \"cad02107-7c85-434e-aeb2-ab7a9924743d\" (UID: \"cad02107-7c85-434e-aeb2-ab7a9924743d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142827 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-rundir\") pod \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142887 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-metrics-certs-tls-certs\") pod \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142918 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-scripts\") pod \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142938 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-combined-ca-bundle\") pod \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.142981 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtfd8\" (UniqueName: \"kubernetes.io/projected/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-kube-api-access-wtfd8\") pod \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.143002 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-northd-tls-certs\") pod \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\" (UID: \"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.144247 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod 
"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" (UID: "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.145293 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-config" (OuterVolumeSpecName: "config") pod "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" (UID: "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.145687 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-scripts" (OuterVolumeSpecName: "scripts") pod "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" (UID: "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.148184 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad02107-7c85-434e-aeb2-ab7a9924743d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cad02107-7c85-434e-aeb2-ab7a9924743d" (UID: "cad02107-7c85-434e-aeb2-ab7a9924743d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.161951 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad02107-7c85-434e-aeb2-ab7a9924743d-kube-api-access-frjxk" (OuterVolumeSpecName: "kube-api-access-frjxk") pod "cad02107-7c85-434e-aeb2-ab7a9924743d" (UID: "cad02107-7c85-434e-aeb2-ab7a9924743d"). InnerVolumeSpecName "kube-api-access-frjxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.202082 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-kube-api-access-wtfd8" (OuterVolumeSpecName: "kube-api-access-wtfd8") pod "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" (UID: "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe"). InnerVolumeSpecName "kube-api-access-wtfd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.228439 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" (UID: "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.229917 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" (UID: "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245166 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frjxk\" (UniqueName: \"kubernetes.io/projected/cad02107-7c85-434e-aeb2-ab7a9924743d-kube-api-access-frjxk\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245215 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245230 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cad02107-7c85-434e-aeb2-ab7a9924743d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245242 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245254 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245264 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245275 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtfd8\" (UniqueName: \"kubernetes.io/projected/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-kube-api-access-wtfd8\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.245286 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.267714 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" (UID: "62f6e7cc-c41e-47b0-8b46-6ec53e998cbe"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.329409 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.351981 4903 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.452615 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-combined-ca-bundle\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.452703 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-credential-keys\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.452742 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d94jq\" (UniqueName: \"kubernetes.io/projected/1f6d6643-926c-4d0d-8986-a7c56e748e3f-kube-api-access-d94jq\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.452819 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-scripts\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.452874 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-config-data\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.452935 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-internal-tls-certs\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.453001 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-fernet-keys\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.453022 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-public-tls-certs\") pod \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\" (UID: \"1f6d6643-926c-4d0d-8986-a7c56e748e3f\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.457086 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.459188 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6d6643-926c-4d0d-8986-a7c56e748e3f-kube-api-access-d94jq" (OuterVolumeSpecName: "kube-api-access-d94jq") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). InnerVolumeSpecName "kube-api-access-d94jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.463951 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.475257 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-scripts" (OuterVolumeSpecName: "scripts") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.484012 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-config-data" (OuterVolumeSpecName: "config-data") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.491386 4903 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 28 16:09:55 crc kubenswrapper[4903]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-28T16:09:48Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 16:09:55 crc kubenswrapper[4903]: /etc/init.d/functions: line 589: 435 Alarm clock "$@" Jan 28 16:09:55 crc kubenswrapper[4903]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-g8tcr" message=< Jan 28 16:09:55 crc kubenswrapper[4903]: Exiting ovn-controller (1) [FAILED] Jan 28 16:09:55 crc kubenswrapper[4903]: Killing ovn-controller (1) [ OK ] Jan 28 16:09:55 crc kubenswrapper[4903]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 28 16:09:55 crc kubenswrapper[4903]: 2026-01-28T16:09:48Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 16:09:55 crc kubenswrapper[4903]: /etc/init.d/functions: line 589: 435 Alarm clock "$@" Jan 28 16:09:55 crc kubenswrapper[4903]: > Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.491435 4903 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 28 16:09:55 crc kubenswrapper[4903]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-28T16:09:48Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 16:09:55 crc kubenswrapper[4903]: /etc/init.d/functions: line 589: 435 Alarm clock "$@" Jan 28 16:09:55 crc kubenswrapper[4903]: > pod="openstack/ovn-controller-g8tcr" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerName="ovn-controller" 
containerID="cri-o://d788fb6f80b15b1916c1e431397434ddb83e22295a82de80156a3e89366081b1" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.491486 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-g8tcr" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerName="ovn-controller" containerID="cri-o://d788fb6f80b15b1916c1e431397434ddb83e22295a82de80156a3e89366081b1" gracePeriod=22 Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.496081 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.523105 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.537065 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f6d6643-926c-4d0d-8986-a7c56e748e3f" (UID: "1f6d6643-926c-4d0d-8986-a7c56e748e3f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555034 4903 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555067 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555076 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555085 4903 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555094 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d94jq\" (UniqueName: \"kubernetes.io/projected/1f6d6643-926c-4d0d-8986-a7c56e748e3f-kube-api-access-d94jq\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555103 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555110 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.555120 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6d6643-926c-4d0d-8986-a7c56e748e3f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.577097 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.590051 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.621591 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_62f6e7cc-c41e-47b0-8b46-6ec53e998cbe/ovn-northd/0.log" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.621643 4903 generic.go:334] "Generic (PLEG): container finished" podID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerID="a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" exitCode=139 Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.621747 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.621842 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe","Type":"ContainerDied","Data":"a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.621919 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f6e7cc-c41e-47b0-8b46-6ec53e998cbe","Type":"ContainerDied","Data":"33013a2f4fd6712d7e2d09406c9ee9ba685fde8f46cb3dc4be4414af99408a01"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.621946 4903 scope.go:117] "RemoveContainer" containerID="858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.653716 4903 generic.go:334] "Generic (PLEG): container finished" podID="bb51034c-4387-4aba-8eff-6ff960538da9" containerID="3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef" exitCode=0 Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.653798 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb51034c-4387-4aba-8eff-6ff960538da9","Type":"ContainerDied","Data":"3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.653824 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bb51034c-4387-4aba-8eff-6ff960538da9","Type":"ContainerDied","Data":"2a645a2906ccbba2a909cee4ad281eb556682d4461f3589c473b077e0bbb5072"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.653894 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656277 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-plugins-conf\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656323 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-confd\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656349 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-generated\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656411 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-operator-scripts\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656468 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-galera-tls-certs\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656503 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb51034c-4387-4aba-8eff-6ff960538da9-pod-info\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656567 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-plugins\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656596 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb51034c-4387-4aba-8eff-6ff960538da9-erlang-cookie-secret\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656618 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-default\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656653 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-erlang-cookie\") pod 
\"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656687 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-server-conf\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656712 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-tls\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656740 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvtbj\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-kube-api-access-bvtbj\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656777 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kolla-config\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656808 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656832 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656861 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r7rn\" (UniqueName: \"kubernetes.io/projected/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kube-api-access-4r7rn\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656894 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data\") pod \"bb51034c-4387-4aba-8eff-6ff960538da9\" (UID: \"bb51034c-4387-4aba-8eff-6ff960538da9\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.656919 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-combined-ca-bundle\") pod \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\" (UID: \"9d45d584-dc21-48a4-842d-ab47fcfdd63d\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.658645 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: 
"9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.666177 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.666300 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: "9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.666473 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.667008 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.669754 4903 generic.go:334] "Generic (PLEG): container finished" podID="1f6d6643-926c-4d0d-8986-a7c56e748e3f" containerID="468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4" exitCode=0 Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.669870 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55866f486f-t9ft2" event={"ID":"1f6d6643-926c-4d0d-8986-a7c56e748e3f","Type":"ContainerDied","Data":"468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.669908 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-55866f486f-t9ft2" event={"ID":"1f6d6643-926c-4d0d-8986-a7c56e748e3f","Type":"ContainerDied","Data":"cd71da642a5c21e3b45fcf93be3685bb0d8fe5759453adf3438a5efc81be2db5"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.670003 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-55866f486f-t9ft2" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.672148 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-kube-api-access-bvtbj" (OuterVolumeSpecName: "kube-api-access-bvtbj") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "kube-api-access-bvtbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.673360 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: "9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.673755 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: "9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.675017 4903 scope.go:117] "RemoveContainer" containerID="a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.676357 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb51034c-4387-4aba-8eff-6ff960538da9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.677155 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kube-api-access-4r7rn" (OuterVolumeSpecName: "kube-api-access-4r7rn") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: "9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "kube-api-access-4r7rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.678095 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g8tcr_33a30cd9-7e56-4a30-8b2d-7786c742c248/ovn-controller/0.log" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.678150 4903 generic.go:334] "Generic (PLEG): container finished" podID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerID="d788fb6f80b15b1916c1e431397434ddb83e22295a82de80156a3e89366081b1" exitCode=137 Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.678269 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g8tcr" event={"ID":"33a30cd9-7e56-4a30-8b2d-7786c742c248","Type":"ContainerDied","Data":"d788fb6f80b15b1916c1e431397434ddb83e22295a82de80156a3e89366081b1"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.680750 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.681150 4903 generic.go:334] "Generic (PLEG): container finished" podID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerID="0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" exitCode=0 Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.681279 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9d45d584-dc21-48a4-842d-ab47fcfdd63d","Type":"ContainerDied","Data":"0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.681311 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9d45d584-dc21-48a4-842d-ab47fcfdd63d","Type":"ContainerDied","Data":"563f678091ae0bfdd59f87ef2dda599d56aa391658b04e5c0448e51b282c611f"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.681413 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.684372 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-588cq" event={"ID":"cad02107-7c85-434e-aeb2-ab7a9924743d","Type":"ContainerDied","Data":"a8d5a6b30536dbf386a97f676a6dbd78b3c379f8074ac80181e12372d12eee72"} Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.684483 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-588cq" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.690075 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.690505 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bb51034c-4387-4aba-8eff-6ff960538da9-pod-info" (OuterVolumeSpecName: "pod-info") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.700981 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.711719 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.714111 4903 scope.go:117] "RemoveContainer" containerID="858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.714498 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8\": container with ID starting with 858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8 not found: ID does not exist" containerID="858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.714551 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8"} err="failed to get container status \"858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8\": rpc error: code = NotFound desc = could not find container \"858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8\": container with ID starting with 858350e0cd79b7884935ad7e32ea4b683a1737993e07c163137971c66904ccf8 not found: ID does not exist" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.714571 4903 scope.go:117] "RemoveContainer" containerID="a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.714793 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9\": container with ID starting with a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9 not found: ID does not exist" containerID="a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.714815 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9"} err="failed to get container status \"a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9\": rpc error: code = NotFound desc = could not find container \"a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9\": container with ID starting with a5c88221cd30fdbf082c42c6b698e0e46ab63ad0aeb6f4ec922f13a1c1c0a7a9 not found: ID does not exist" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.714827 4903 scope.go:117] "RemoveContainer" containerID="3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.721492 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data" (OuterVolumeSpecName: "config-data") pod 
"bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.721241 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: "9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.738752 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: "9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.779982 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": dial tcp 10.217.0.166:8080: i/o timeout" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.780034 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-867d8c4cc5-vz4lw" podUID="bf32204d-973f-4397-8fbe-8b155f1f6f52" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": dial tcp 10.217.0.166:8080: i/o timeout" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.780989 4903 scope.go:117] "RemoveContainer" containerID="35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.783970 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9d45d584-dc21-48a4-842d-ab47fcfdd63d" (UID: "9d45d584-dc21-48a4-842d-ab47fcfdd63d"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785404 4903 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785434 4903 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bb51034c-4387-4aba-8eff-6ff960538da9-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785447 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785457 4903 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bb51034c-4387-4aba-8eff-6ff960538da9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785471 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785481 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785492 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785503 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvtbj\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-kube-api-access-bvtbj\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785513 4903 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785596 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785614 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785626 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r7rn\" (UniqueName: \"kubernetes.io/projected/9d45d584-dc21-48a4-842d-ab47fcfdd63d-kube-api-access-4r7rn\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785638 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-config-data\") on node \"crc\" DevicePath \"\"" 
Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785649 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d45d584-dc21-48a4-842d-ab47fcfdd63d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785659 4903 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785671 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d45d584-dc21-48a4-842d-ab47fcfdd63d-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.785681 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d45d584-dc21-48a4-842d-ab47fcfdd63d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.791836 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-server-conf" (OuterVolumeSpecName: "server-conf") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.800905 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-55866f486f-t9ft2"] Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.812593 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-55866f486f-t9ft2"] Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.818714 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.819858 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.820702 4903 scope.go:117] "RemoveContainer" containerID="3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.824685 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef\": container with ID starting with 3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef not found: ID does not exist" containerID="3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.824737 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef"} err="failed to get container status \"3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef\": rpc error: code = NotFound desc = could not find container \"3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef\": container with ID starting with 3297f673c7a4c1ff44bd545ce6f7c1f80aa06d530e08222c84369eec190cc7ef not found: ID does not exist" Jan 28 
16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.824774 4903 scope.go:117] "RemoveContainer" containerID="35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.825153 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2\": container with ID starting with 35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2 not found: ID does not exist" containerID="35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.825215 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2"} err="failed to get container status \"35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2\": rpc error: code = NotFound desc = could not find container \"35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2\": container with ID starting with 35f8f6f747a1efb4d646cb621565f8dc9a7b37c387930df6bd0b8acdc311a1a2 not found: ID does not exist" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.825242 4903 scope.go:117] "RemoveContainer" containerID="468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.827945 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-588cq"] Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.834680 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-588cq"] Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.837468 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bb51034c-4387-4aba-8eff-6ff960538da9" (UID: "bb51034c-4387-4aba-8eff-6ff960538da9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.838852 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g8tcr_33a30cd9-7e56-4a30-8b2d-7786c742c248/ovn-controller/0.log" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.838964 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g8tcr" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.855061 4903 scope.go:117] "RemoveContainer" containerID="468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.855998 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4\": container with ID starting with 468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4 not found: ID does not exist" containerID="468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.856095 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4"} err="failed to get container status \"468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4\": rpc error: code = NotFound desc = could not find container \"468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4\": container with ID starting with 468dc02e975c43ba29092b959838f21ad8ad46918edcd52bf78ca34a1d179aa4 not found: ID does not exist" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.856195 4903 scope.go:117] "RemoveContainer" containerID="0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.886704 4903 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bb51034c-4387-4aba-8eff-6ff960538da9-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.886745 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.886758 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.886770 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bb51034c-4387-4aba-8eff-6ff960538da9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.900946 4903 scope.go:117] "RemoveContainer" containerID="b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.931887 4903 scope.go:117] "RemoveContainer" containerID="0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.932438 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9\": container with ID starting with 0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9 not found: ID does not exist" containerID="0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.932481 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9"} 
err="failed to get container status \"0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9\": rpc error: code = NotFound desc = could not find container \"0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9\": container with ID starting with 0f0fcc1b4da22cc981ad22532160e18a73bd18c517960706ec9f7ec0175912b9 not found: ID does not exist" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.932507 4903 scope.go:117] "RemoveContainer" containerID="b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b" Jan 28 16:09:55 crc kubenswrapper[4903]: E0128 16:09:55.932932 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b\": container with ID starting with b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b not found: ID does not exist" containerID="b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.932974 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b"} err="failed to get container status \"b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b\": rpc error: code = NotFound desc = could not find container \"b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b\": container with ID starting with b59ff4f79891348fa36bf228904dbaca74117342511e8ae0573781ed946dd39b not found: ID does not exist" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987106 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run\") pod \"33a30cd9-7e56-4a30-8b2d-7786c742c248\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987205 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a30cd9-7e56-4a30-8b2d-7786c742c248-scripts\") pod \"33a30cd9-7e56-4a30-8b2d-7786c742c248\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987252 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-combined-ca-bundle\") pod \"33a30cd9-7e56-4a30-8b2d-7786c742c248\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987294 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run-ovn\") pod \"33a30cd9-7e56-4a30-8b2d-7786c742c248\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987420 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-ovn-controller-tls-certs\") pod \"33a30cd9-7e56-4a30-8b2d-7786c742c248\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987468 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26gtm\" (UniqueName: 
\"kubernetes.io/projected/33a30cd9-7e56-4a30-8b2d-7786c742c248-kube-api-access-26gtm\") pod \"33a30cd9-7e56-4a30-8b2d-7786c742c248\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987500 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-log-ovn\") pod \"33a30cd9-7e56-4a30-8b2d-7786c742c248\" (UID: \"33a30cd9-7e56-4a30-8b2d-7786c742c248\") " Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987678 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run" (OuterVolumeSpecName: "var-run") pod "33a30cd9-7e56-4a30-8b2d-7786c742c248" (UID: "33a30cd9-7e56-4a30-8b2d-7786c742c248"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.987885 4903 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.988303 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "33a30cd9-7e56-4a30-8b2d-7786c742c248" (UID: "33a30cd9-7e56-4a30-8b2d-7786c742c248"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.989234 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a30cd9-7e56-4a30-8b2d-7786c742c248-scripts" (OuterVolumeSpecName: "scripts") pod "33a30cd9-7e56-4a30-8b2d-7786c742c248" (UID: "33a30cd9-7e56-4a30-8b2d-7786c742c248"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.989285 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "33a30cd9-7e56-4a30-8b2d-7786c742c248" (UID: "33a30cd9-7e56-4a30-8b2d-7786c742c248"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:09:55 crc kubenswrapper[4903]: I0128 16:09:55.993646 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a30cd9-7e56-4a30-8b2d-7786c742c248-kube-api-access-26gtm" (OuterVolumeSpecName: "kube-api-access-26gtm") pod "33a30cd9-7e56-4a30-8b2d-7786c742c248" (UID: "33a30cd9-7e56-4a30-8b2d-7786c742c248"). InnerVolumeSpecName "kube-api-access-26gtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.010714 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33a30cd9-7e56-4a30-8b2d-7786c742c248" (UID: "33a30cd9-7e56-4a30-8b2d-7786c742c248"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.052429 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "33a30cd9-7e56-4a30-8b2d-7786c742c248" (UID: "33a30cd9-7e56-4a30-8b2d-7786c742c248"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.083256 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.090742 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.090778 4903 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.090792 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/33a30cd9-7e56-4a30-8b2d-7786c742c248-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.090807 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26gtm\" (UniqueName: \"kubernetes.io/projected/33a30cd9-7e56-4a30-8b2d-7786c742c248-kube-api-access-26gtm\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.090828 4903 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33a30cd9-7e56-4a30-8b2d-7786c742c248-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.090843 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33a30cd9-7e56-4a30-8b2d-7786c742c248-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.094238 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.102484 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.107868 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.366317 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.368372 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.368440 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.368729 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.368767 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.369824 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.370764 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.370793 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.423078 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6d6643-926c-4d0d-8986-a7c56e748e3f" path="/var/lib/kubelet/pods/1f6d6643-926c-4d0d-8986-a7c56e748e3f/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.424576 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" path="/var/lib/kubelet/pods/62f6e7cc-c41e-47b0-8b46-6ec53e998cbe/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.426405 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" path="/var/lib/kubelet/pods/9d45d584-dc21-48a4-842d-ab47fcfdd63d/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.428085 4903 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef215ce-85eb-4148-848a-aeb5a15e343e" path="/var/lib/kubelet/pods/9ef215ce-85eb-4148-848a-aeb5a15e343e/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.429085 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac3a1bb-718a-42b1-9c87-71258a05b083" path="/var/lib/kubelet/pods/bac3a1bb-718a-42b1-9c87-71258a05b083/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.430235 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" path="/var/lib/kubelet/pods/bb51034c-4387-4aba-8eff-6ff960538da9/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.431500 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad02107-7c85-434e-aeb2-ab7a9924743d" path="/var/lib/kubelet/pods/cad02107-7c85-434e-aeb2-ab7a9924743d/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.432364 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" path="/var/lib/kubelet/pods/cee6442c-f9ef-4902-b6ec-2bc01a904849/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.433054 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c39267-5b08-4783-b267-7ee6395020f2" path="/var/lib/kubelet/pods/d3c39267-5b08-4783-b267-7ee6395020f2/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.433668 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" path="/var/lib/kubelet/pods/fb7483e7-0a5f-47dd-9f1a-baaed6822ffd/volumes" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.603648 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.613492 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.613558 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.673691 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.684094 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.708462 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-combined-ca-bundle\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.708844 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-config-data\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.708959 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-ceilometer-tls-certs\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.709471 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-scripts\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.709609 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-sg-core-conf-yaml\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.709716 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsmds\" (UniqueName: \"kubernetes.io/projected/07a65ed0-8012-4a4a-b973-8b1fcdafef52-kube-api-access-jsmds\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.711754 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-run-httpd\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.712335 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.716684 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g8tcr_33a30cd9-7e56-4a30-8b2d-7786c742c248/ovn-controller/0.log" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.717110 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g8tcr" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.717863 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g8tcr" event={"ID":"33a30cd9-7e56-4a30-8b2d-7786c742c248","Type":"ContainerDied","Data":"055cb5057de75ce2a7424b7bf377259c82047ce11a931e2a8586cc144da7b543"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.718059 4903 scope.go:117] "RemoveContainer" containerID="d788fb6f80b15b1916c1e431397434ddb83e22295a82de80156a3e89366081b1" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.718490 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-log-httpd\") pod \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\" (UID: \"07a65ed0-8012-4a4a-b973-8b1fcdafef52\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.718949 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.719449 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.719696 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.720613 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-scripts" (OuterVolumeSpecName: "scripts") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.724946 4903 generic.go:334] "Generic (PLEG): container finished" podID="777a1f56-3b78-4161-b388-22d924bf442c" containerID="57f5aead75f7ccb66670a88b340768f4042e67c223d457f4586543c309862540" exitCode=0 Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.725202 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df7b7b7fc-j8ps6" event={"ID":"777a1f56-3b78-4161-b388-22d924bf442c","Type":"ContainerDied","Data":"57f5aead75f7ccb66670a88b340768f4042e67c223d457f4586543c309862540"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.728626 4903 generic.go:334] "Generic (PLEG): container finished" podID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerID="843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e" exitCode=0 Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.728674 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerDied","Data":"843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.728695 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"07a65ed0-8012-4a4a-b973-8b1fcdafef52","Type":"ContainerDied","Data":"7834db687f4c4abcfb882be9e49644d6d743600f2ea5ff2f01a5f1dbde3c0e9f"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.728765 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.731480 4903 generic.go:334] "Generic (PLEG): container finished" podID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerID="c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7" exitCode=0 Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.731552 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" event={"ID":"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9","Type":"ContainerDied","Data":"c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.731574 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" event={"ID":"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9","Type":"ContainerDied","Data":"76e5a9fe1b05d7b4578120a0f31a2b3fe045b4a8f73ddaffc391b45091ddb9c5"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.731631 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-698d7dfbbb-d88kl" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.733356 4903 generic.go:334] "Generic (PLEG): container finished" podID="64646a57-b496-4bf3-8b63-d53321316304" containerID="deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210" exitCode=0 Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.733391 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" event={"ID":"64646a57-b496-4bf3-8b63-d53321316304","Type":"ContainerDied","Data":"deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.733405 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" event={"ID":"64646a57-b496-4bf3-8b63-d53321316304","Type":"ContainerDied","Data":"cf05763a6a3afc9c6044d15f18f630f4d0ebc978daa2ade57f18a815bc609544"} Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.733444 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cd9f7788c-9rhk8" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.734106 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a65ed0-8012-4a4a-b973-8b1fcdafef52-kube-api-access-jsmds" (OuterVolumeSpecName: "kube-api-access-jsmds") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "kube-api-access-jsmds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.761724 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.773313 4903 scope.go:117] "RemoveContainer" containerID="7144e9f3e379f3b1c48972a79f95a4ca58fc84bde1c3b98a44aa1c439247a433" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.798883 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.798975 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.800346 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g8tcr"] Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.812896 4903 scope.go:117] "RemoveContainer" containerID="57f5aead75f7ccb66670a88b340768f4042e67c223d457f4586543c309862540" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.817907 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g8tcr"] Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821430 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-combined-ca-bundle\") pod \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821488 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data\") pod \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821517 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k549f\" (UniqueName: \"kubernetes.io/projected/777a1f56-3b78-4161-b388-22d924bf442c-kube-api-access-k549f\") pod \"777a1f56-3b78-4161-b388-22d924bf442c\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821608 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-internal-tls-certs\") pod \"777a1f56-3b78-4161-b388-22d924bf442c\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821670 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data\") pod \"64646a57-b496-4bf3-8b63-d53321316304\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821697 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data-custom\") pod \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821717 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-httpd-config\") pod \"777a1f56-3b78-4161-b388-22d924bf442c\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821738 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4sm8\" (UniqueName: \"kubernetes.io/projected/64646a57-b496-4bf3-8b63-d53321316304-kube-api-access-n4sm8\") pod \"64646a57-b496-4bf3-8b63-d53321316304\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821758 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-combined-ca-bundle\") pod \"777a1f56-3b78-4161-b388-22d924bf442c\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821843 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data-custom\") pod \"64646a57-b496-4bf3-8b63-d53321316304\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821858 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-ovndb-tls-certs\") pod \"777a1f56-3b78-4161-b388-22d924bf442c\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821905 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64646a57-b496-4bf3-8b63-d53321316304-logs\") pod \"64646a57-b496-4bf3-8b63-d53321316304\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821930 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r5gp\" (UniqueName: \"kubernetes.io/projected/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-kube-api-access-7r5gp\") pod \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821952 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-logs\") pod \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\" (UID: \"80fc9b4a-8eb0-41c9-8809-7d83f117c3b9\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821968 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-public-tls-certs\") pod \"777a1f56-3b78-4161-b388-22d924bf442c\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.821983 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-combined-ca-bundle\") pod \"64646a57-b496-4bf3-8b63-d53321316304\" (UID: \"64646a57-b496-4bf3-8b63-d53321316304\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822008 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-config\") pod \"777a1f56-3b78-4161-b388-22d924bf442c\" (UID: \"777a1f56-3b78-4161-b388-22d924bf442c\") " Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822262 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07a65ed0-8012-4a4a-b973-8b1fcdafef52-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822284 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 
16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822295 4903 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822304 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822313 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822324 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsmds\" (UniqueName: \"kubernetes.io/projected/07a65ed0-8012-4a4a-b973-8b1fcdafef52-kube-api-access-jsmds\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.822961 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-config-data" (OuterVolumeSpecName: "config-data") pod "07a65ed0-8012-4a4a-b973-8b1fcdafef52" (UID: "07a65ed0-8012-4a4a-b973-8b1fcdafef52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.824987 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-logs" (OuterVolumeSpecName: "logs") pod "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" (UID: "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.825002 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64646a57-b496-4bf3-8b63-d53321316304-logs" (OuterVolumeSpecName: "logs") pod "64646a57-b496-4bf3-8b63-d53321316304" (UID: "64646a57-b496-4bf3-8b63-d53321316304"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.828455 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" (UID: "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.828771 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64646a57-b496-4bf3-8b63-d53321316304" (UID: "64646a57-b496-4bf3-8b63-d53321316304"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.829264 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777a1f56-3b78-4161-b388-22d924bf442c-kube-api-access-k549f" (OuterVolumeSpecName: "kube-api-access-k549f") pod "777a1f56-3b78-4161-b388-22d924bf442c" (UID: "777a1f56-3b78-4161-b388-22d924bf442c"). InnerVolumeSpecName "kube-api-access-k549f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.829656 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-kube-api-access-7r5gp" (OuterVolumeSpecName: "kube-api-access-7r5gp") pod "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" (UID: "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9"). InnerVolumeSpecName "kube-api-access-7r5gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.831731 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64646a57-b496-4bf3-8b63-d53321316304-kube-api-access-n4sm8" (OuterVolumeSpecName: "kube-api-access-n4sm8") pod "64646a57-b496-4bf3-8b63-d53321316304" (UID: "64646a57-b496-4bf3-8b63-d53321316304"). InnerVolumeSpecName "kube-api-access-n4sm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.832634 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "777a1f56-3b78-4161-b388-22d924bf442c" (UID: "777a1f56-3b78-4161-b388-22d924bf442c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.854830 4903 scope.go:117] "RemoveContainer" containerID="245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.856430 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" (UID: "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.875978 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data" (OuterVolumeSpecName: "config-data") pod "64646a57-b496-4bf3-8b63-d53321316304" (UID: "64646a57-b496-4bf3-8b63-d53321316304"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.875916 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64646a57-b496-4bf3-8b63-d53321316304" (UID: "64646a57-b496-4bf3-8b63-d53321316304"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.880838 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data" (OuterVolumeSpecName: "config-data") pod "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" (UID: "80fc9b4a-8eb0-41c9-8809-7d83f117c3b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.881363 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-config" (OuterVolumeSpecName: "config") pod "777a1f56-3b78-4161-b388-22d924bf442c" (UID: "777a1f56-3b78-4161-b388-22d924bf442c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.886384 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "777a1f56-3b78-4161-b388-22d924bf442c" (UID: "777a1f56-3b78-4161-b388-22d924bf442c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.887957 4903 scope.go:117] "RemoveContainer" containerID="f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.891628 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "777a1f56-3b78-4161-b388-22d924bf442c" (UID: "777a1f56-3b78-4161-b388-22d924bf442c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.906742 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "777a1f56-3b78-4161-b388-22d924bf442c" (UID: "777a1f56-3b78-4161-b388-22d924bf442c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.908414 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "777a1f56-3b78-4161-b388-22d924bf442c" (UID: "777a1f56-3b78-4161-b388-22d924bf442c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.910765 4903 scope.go:117] "RemoveContainer" containerID="843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923362 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923388 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k549f\" (UniqueName: \"kubernetes.io/projected/777a1f56-3b78-4161-b388-22d924bf442c-kube-api-access-k549f\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923397 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923405 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07a65ed0-8012-4a4a-b973-8b1fcdafef52-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923413 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923423 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923431 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923439 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4sm8\" (UniqueName: \"kubernetes.io/projected/64646a57-b496-4bf3-8b63-d53321316304-kube-api-access-n4sm8\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923446 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923455 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923463 4903 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923471 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64646a57-b496-4bf3-8b63-d53321316304-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923480 4903 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7r5gp\" (UniqueName: \"kubernetes.io/projected/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-kube-api-access-7r5gp\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923489 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923496 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923504 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64646a57-b496-4bf3-8b63-d53321316304-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923512 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/777a1f56-3b78-4161-b388-22d924bf442c-config\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.923519 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.931386 4903 scope.go:117] "RemoveContainer" containerID="4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.957087 4903 scope.go:117] "RemoveContainer" containerID="245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77" Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.961614 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77\": container with ID starting with 245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77 not found: ID does not exist" containerID="245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.961676 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77"} err="failed to get container status \"245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77\": rpc error: code = NotFound desc = could not find container \"245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77\": container with ID starting with 245d7dfdd5014c3f381e88efac1c165d91a85ba2e10fd61efb97d807fcc76d77 not found: ID does not exist" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.961710 4903 scope.go:117] "RemoveContainer" containerID="f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978" Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.966091 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978\": container with ID starting with f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978 not found: ID does not exist" containerID="f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978" Jan 28 
16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.966135 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978"} err="failed to get container status \"f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978\": rpc error: code = NotFound desc = could not find container \"f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978\": container with ID starting with f60334b37f91f53ef083e29617a7dce86ca61903a8c0daeceaad1c6d6a066978 not found: ID does not exist" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.966182 4903 scope.go:117] "RemoveContainer" containerID="843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e" Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.966803 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e\": container with ID starting with 843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e not found: ID does not exist" containerID="843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.966855 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e"} err="failed to get container status \"843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e\": rpc error: code = NotFound desc = could not find container \"843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e\": container with ID starting with 843afec5a2b2005a5467c72d2e0b3e02a3b5247c4b9937c3ffe41f2429e3dd2e not found: ID does not exist" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.966892 4903 scope.go:117] "RemoveContainer" containerID="4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1" Jan 28 16:09:56 crc kubenswrapper[4903]: E0128 16:09:56.967212 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1\": container with ID starting with 4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1 not found: ID does not exist" containerID="4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.967235 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1"} err="failed to get container status \"4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1\": rpc error: code = NotFound desc = could not find container \"4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1\": container with ID starting with 4c0e401237371b09f30a2679293c5e8f59735c0454744d8482892117934844a1 not found: ID does not exist" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.967249 4903 scope.go:117] "RemoveContainer" containerID="c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7" Jan 28 16:09:56 crc kubenswrapper[4903]: I0128 16:09:56.993650 4903 scope.go:117] "RemoveContainer" containerID="6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.014575 4903 scope.go:117] "RemoveContainer" 
containerID="c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7" Jan 28 16:09:57 crc kubenswrapper[4903]: E0128 16:09:57.015054 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7\": container with ID starting with c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7 not found: ID does not exist" containerID="c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.015099 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7"} err="failed to get container status \"c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7\": rpc error: code = NotFound desc = could not find container \"c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7\": container with ID starting with c013df5ddaae67d644c2e41bd397c55be521d1b493ab503a8561118492da39c7 not found: ID does not exist" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.015134 4903 scope.go:117] "RemoveContainer" containerID="6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228" Jan 28 16:09:57 crc kubenswrapper[4903]: E0128 16:09:57.015476 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228\": container with ID starting with 6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228 not found: ID does not exist" containerID="6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.015637 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228"} err="failed to get container status \"6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228\": rpc error: code = NotFound desc = could not find container \"6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228\": container with ID starting with 6a3505a1d9323fa56cef70330665a1a71c0d6b28317d72f00e2a9b7d736d6228 not found: ID does not exist" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.015669 4903 scope.go:117] "RemoveContainer" containerID="deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.039350 4903 scope.go:117] "RemoveContainer" containerID="6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.075678 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.085930 4903 scope.go:117] "RemoveContainer" containerID="deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210" Jan 28 16:09:57 crc kubenswrapper[4903]: E0128 16:09:57.086709 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210\": container with ID starting with deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210 not found: ID does not exist" containerID="deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210" Jan 28 16:09:57 crc 
kubenswrapper[4903]: I0128 16:09:57.086779 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210"} err="failed to get container status \"deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210\": rpc error: code = NotFound desc = could not find container \"deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210\": container with ID starting with deb7acde75722a7823be48d6815a82cf925353c1e396c369c696dafbfbc70210 not found: ID does not exist" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.086818 4903 scope.go:117] "RemoveContainer" containerID="6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82" Jan 28 16:09:57 crc kubenswrapper[4903]: E0128 16:09:57.087217 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82\": container with ID starting with 6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82 not found: ID does not exist" containerID="6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.087236 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82"} err="failed to get container status \"6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82\": rpc error: code = NotFound desc = could not find container \"6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82\": container with ID starting with 6129497daefb895350042440192e5bbda20c658b973526fe793bbdf47d4b6c82 not found: ID does not exist" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.092704 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.110671 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cd9f7788c-9rhk8"] Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.123593 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5cd9f7788c-9rhk8"] Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.128706 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-698d7dfbbb-d88kl"] Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.135460 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-698d7dfbbb-d88kl"] Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.750021 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-df7b7b7fc-j8ps6" Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.750020 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-df7b7b7fc-j8ps6" event={"ID":"777a1f56-3b78-4161-b388-22d924bf442c","Type":"ContainerDied","Data":"58ab758700768ed2a02ccd2d856851248ce75d0485b59989c49a652a32abcc68"} Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.793635 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-df7b7b7fc-j8ps6"] Jan 28 16:09:57 crc kubenswrapper[4903]: I0128 16:09:57.800321 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-df7b7b7fc-j8ps6"] Jan 28 16:09:58 crc kubenswrapper[4903]: I0128 16:09:58.425198 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" path="/var/lib/kubelet/pods/07a65ed0-8012-4a4a-b973-8b1fcdafef52/volumes" Jan 28 16:09:58 crc kubenswrapper[4903]: I0128 16:09:58.426113 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" path="/var/lib/kubelet/pods/33a30cd9-7e56-4a30-8b2d-7786c742c248/volumes" Jan 28 16:09:58 crc kubenswrapper[4903]: I0128 16:09:58.426737 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64646a57-b496-4bf3-8b63-d53321316304" path="/var/lib/kubelet/pods/64646a57-b496-4bf3-8b63-d53321316304/volumes" Jan 28 16:09:58 crc kubenswrapper[4903]: I0128 16:09:58.427957 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777a1f56-3b78-4161-b388-22d924bf442c" path="/var/lib/kubelet/pods/777a1f56-3b78-4161-b388-22d924bf442c/volumes" Jan 28 16:09:58 crc kubenswrapper[4903]: I0128 16:09:58.428553 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" path="/var/lib/kubelet/pods/80fc9b4a-8eb0-41c9-8809-7d83f117c3b9/volumes" Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.364907 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.367103 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.367178 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.368117 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.368223 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.368509 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.369547 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:01 crc kubenswrapper[4903]: E0128 16:10:01.369646 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.365106 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.365990 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.366613 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.366726 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.367616 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.369122 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.370450 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:06 crc kubenswrapper[4903]: E0128 16:10:06.370575 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.364288 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.365107 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.365521 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.365621 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container 
process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.366596 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.369039 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.370561 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:11 crc kubenswrapper[4903]: E0128 16:10:11.370677 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.364622 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.365279 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.365682 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.365721 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" 
podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.367397 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.368660 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.369727 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 16:10:16 crc kubenswrapper[4903]: E0128 16:10:16.369767 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-sdvpf" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:10:17 crc kubenswrapper[4903]: I0128 16:10:17.937984 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sdvpf_87970b20-51e0-4e11-875a-8dea3b633ac5/ovs-vswitchd/0.log" Jan 28 16:10:17 crc kubenswrapper[4903]: I0128 16:10:17.939280 4903 generic.go:334] "Generic (PLEG): container finished" podID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" exitCode=137 Jan 28 16:10:17 crc kubenswrapper[4903]: I0128 16:10:17.939348 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sdvpf" event={"ID":"87970b20-51e0-4e11-875a-8dea3b633ac5","Type":"ContainerDied","Data":"2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e"} Jan 28 16:10:17 crc kubenswrapper[4903]: I0128 16:10:17.948360 4903 generic.go:334] "Generic (PLEG): container finished" podID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerID="427c2da60bfa90da8ebbfb150ccfb94366c48918a404ebdd1894102608ea88f1" exitCode=137 Jan 28 16:10:17 crc kubenswrapper[4903]: I0128 16:10:17.948399 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"427c2da60bfa90da8ebbfb150ccfb94366c48918a404ebdd1894102608ea88f1"} Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.096153 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sdvpf_87970b20-51e0-4e11-875a-8dea3b633ac5/ovs-vswitchd/0.log" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.097554 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.109865 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281226 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87970b20-51e0-4e11-875a-8dea3b633ac5-scripts\") pod \"87970b20-51e0-4e11-875a-8dea3b633ac5\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281320 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-etc-ovs\") pod \"87970b20-51e0-4e11-875a-8dea3b633ac5\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281364 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-lib\") pod \"87970b20-51e0-4e11-875a-8dea3b633ac5\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281454 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-cache\") pod \"2fe73c5e-1acc-4125-8ff9-e42b69488039\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281482 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "87970b20-51e0-4e11-875a-8dea3b633ac5" (UID: "87970b20-51e0-4e11-875a-8dea3b633ac5"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281616 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2fe73c5e-1acc-4125-8ff9-e42b69488039\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281613 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-lib" (OuterVolumeSpecName: "var-lib") pod "87970b20-51e0-4e11-875a-8dea3b633ac5" (UID: "87970b20-51e0-4e11-875a-8dea3b633ac5"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281705 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m6sr\" (UniqueName: \"kubernetes.io/projected/87970b20-51e0-4e11-875a-8dea3b633ac5-kube-api-access-2m6sr\") pod \"87970b20-51e0-4e11-875a-8dea3b633ac5\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281817 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-run\") pod \"87970b20-51e0-4e11-875a-8dea3b633ac5\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281864 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsbxq\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-kube-api-access-bsbxq\") pod \"2fe73c5e-1acc-4125-8ff9-e42b69488039\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281897 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-lock\") pod \"2fe73c5e-1acc-4125-8ff9-e42b69488039\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281925 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-run" (OuterVolumeSpecName: "var-run") pod "87970b20-51e0-4e11-875a-8dea3b633ac5" (UID: "87970b20-51e0-4e11-875a-8dea3b633ac5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281922 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-log\") pod \"87970b20-51e0-4e11-875a-8dea3b633ac5\" (UID: \"87970b20-51e0-4e11-875a-8dea3b633ac5\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.281953 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-log" (OuterVolumeSpecName: "var-log") pod "87970b20-51e0-4e11-875a-8dea3b633ac5" (UID: "87970b20-51e0-4e11-875a-8dea3b633ac5"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282079 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") pod \"2fe73c5e-1acc-4125-8ff9-e42b69488039\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282169 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe73c5e-1acc-4125-8ff9-e42b69488039-combined-ca-bundle\") pod \"2fe73c5e-1acc-4125-8ff9-e42b69488039\" (UID: \"2fe73c5e-1acc-4125-8ff9-e42b69488039\") " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282327 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-lock" (OuterVolumeSpecName: "lock") pod "2fe73c5e-1acc-4125-8ff9-e42b69488039" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282580 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-cache" (OuterVolumeSpecName: "cache") pod "2fe73c5e-1acc-4125-8ff9-e42b69488039" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282835 4903 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282876 4903 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-lib\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282896 4903 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-cache\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282914 4903 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282931 4903 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2fe73c5e-1acc-4125-8ff9-e42b69488039-lock\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282948 4903 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/87970b20-51e0-4e11-875a-8dea3b633ac5-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.282835 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87970b20-51e0-4e11-875a-8dea3b633ac5-scripts" (OuterVolumeSpecName: "scripts") pod "87970b20-51e0-4e11-875a-8dea3b633ac5" (UID: "87970b20-51e0-4e11-875a-8dea3b633ac5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.286251 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "swift") pod "2fe73c5e-1acc-4125-8ff9-e42b69488039" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.286271 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-kube-api-access-bsbxq" (OuterVolumeSpecName: "kube-api-access-bsbxq") pod "2fe73c5e-1acc-4125-8ff9-e42b69488039" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039"). InnerVolumeSpecName "kube-api-access-bsbxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.286754 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87970b20-51e0-4e11-875a-8dea3b633ac5-kube-api-access-2m6sr" (OuterVolumeSpecName: "kube-api-access-2m6sr") pod "87970b20-51e0-4e11-875a-8dea3b633ac5" (UID: "87970b20-51e0-4e11-875a-8dea3b633ac5"). InnerVolumeSpecName "kube-api-access-2m6sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.287603 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2fe73c5e-1acc-4125-8ff9-e42b69488039" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.384208 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.384247 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m6sr\" (UniqueName: \"kubernetes.io/projected/87970b20-51e0-4e11-875a-8dea3b633ac5-kube-api-access-2m6sr\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.384297 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsbxq\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-kube-api-access-bsbxq\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.384316 4903 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2fe73c5e-1acc-4125-8ff9-e42b69488039-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.384329 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87970b20-51e0-4e11-875a-8dea3b633ac5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.399033 4903 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.485411 4903 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.552374 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fe73c5e-1acc-4125-8ff9-e42b69488039-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fe73c5e-1acc-4125-8ff9-e42b69488039" (UID: "2fe73c5e-1acc-4125-8ff9-e42b69488039"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.587605 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe73c5e-1acc-4125-8ff9-e42b69488039-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.963695 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sdvpf_87970b20-51e0-4e11-875a-8dea3b633ac5/ovs-vswitchd/0.log" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.965309 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sdvpf" event={"ID":"87970b20-51e0-4e11-875a-8dea3b633ac5","Type":"ContainerDied","Data":"539b3793e3b662d216d7ba3d666e6bedd07dbff5f41e72bbfd01466f59cc881e"} Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.965354 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sdvpf" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.965371 4903 scope.go:117] "RemoveContainer" containerID="2d83b04d55a1c5e4247a6091dd586f7d6d3929782176eeb781ada83383a3666e" Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.980781 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2fe73c5e-1acc-4125-8ff9-e42b69488039","Type":"ContainerDied","Data":"3765f199c704975ce06bc8b7409ecc4c6569b7c5b0066810a89fd957f5e42637"} Jan 28 16:10:18 crc kubenswrapper[4903]: I0128 16:10:18.980896 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.000597 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-sdvpf"] Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.003780 4903 scope.go:117] "RemoveContainer" containerID="7ac5c45fd5c54c4b1332e6ae2714ab9d338f6ad70d217118dc65f4f3be7b659d" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.009626 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-sdvpf"] Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.027299 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.032008 4903 scope.go:117] "RemoveContainer" containerID="440dbc2bb9f9bafe7da3797b181ac6fd61e61287c01229fa2c3e01029fad65ee" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.033579 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.058894 4903 scope.go:117] "RemoveContainer" containerID="427c2da60bfa90da8ebbfb150ccfb94366c48918a404ebdd1894102608ea88f1" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.101635 4903 scope.go:117] "RemoveContainer" containerID="fdfe4956af02ae007c08b5307ab6872b8e0595452ba36784decb8edd4b8a5d9b" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.123487 4903 scope.go:117] "RemoveContainer" containerID="5f7182de515dde6ed72737089f102bb7c64b5bceae2ea9dd0e07b98590e0126b" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.146833 4903 scope.go:117] "RemoveContainer" containerID="fddb56423e806702e1b6dee36e7347c017a45be9d08b635bb4e199df0eb3489e" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.167832 4903 scope.go:117] "RemoveContainer" containerID="bbcf62a11c97c0772b915ab52c7b8ed5336a2b9f1735f7d74650ddbac7968b3f" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.194894 4903 scope.go:117] "RemoveContainer" containerID="eebba63abd410036bd2f597b488df5fd3fc712afc83ddb919fb3f33d78e82010" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.214211 4903 scope.go:117] "RemoveContainer" containerID="49fa880f8fb88d223229db177857faa713b2086ac01e656664ea7ecec2ee6237" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.233385 4903 scope.go:117] "RemoveContainer" containerID="987273170f201bd99282bf5c33154171012fac1d73596bce885546d8d13a8681" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.249731 4903 scope.go:117] "RemoveContainer" containerID="eb7902754910c952a0e047350a7096399669542b9269940b5d03b5d9577fabae" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.267373 4903 scope.go:117] "RemoveContainer" containerID="9ec33b0218cbf5be31eaa4605b066cecb134d4131c4136762bbbf8bceaed18e9" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.284808 4903 scope.go:117] "RemoveContainer" containerID="2077d11c701d11f3d5b9f94bf673c99cd175858ca2ee3f9f5496123712d24aa8" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.301064 4903 scope.go:117] "RemoveContainer" containerID="c78ef9751a8dce58d95c9353ff8051a2fbe27f2886b49daeb6742161a84e3b25" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.329065 4903 scope.go:117] "RemoveContainer" containerID="1902647852c72d50cd7f7eba6e1b998be88fa3e8bce1292d120aa7ad36fcce6a" Jan 28 16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.350038 4903 scope.go:117] "RemoveContainer" containerID="05bd562da8eff098ad5295672772555c223f358c232a73d480a9a4208fbc2f2e" Jan 28 
16:10:19 crc kubenswrapper[4903]: I0128 16:10:19.374138 4903 scope.go:117] "RemoveContainer" containerID="8d6925cdba582789ace3400817f99ef5a11fa5573bf42b9183b2310d83669949" Jan 28 16:10:20 crc kubenswrapper[4903]: I0128 16:10:20.440417 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" path="/var/lib/kubelet/pods/2fe73c5e-1acc-4125-8ff9-e42b69488039/volumes" Jan 28 16:10:20 crc kubenswrapper[4903]: I0128 16:10:20.444672 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" path="/var/lib/kubelet/pods/87970b20-51e0-4e11-875a-8dea3b633ac5/volumes" Jan 28 16:10:26 crc kubenswrapper[4903]: I0128 16:10:26.613325 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:10:26 crc kubenswrapper[4903]: I0128 16:10:26.613883 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.573021 4903 scope.go:117] "RemoveContainer" containerID="93d92bf9f774ba6a91d35fc3a13bb44ccf661c03287e032bf834bdf903270404" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.598203 4903 scope.go:117] "RemoveContainer" containerID="5d513371bbd71efb4cfca671c88930c921d810f556a12fe47c138227552f0fd8" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.630473 4903 scope.go:117] "RemoveContainer" containerID="9b433c7bd6b0342ec2ec13718d0984a80ca303c5b8c24a199c67fdf90da8fac2" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.652985 4903 scope.go:117] "RemoveContainer" containerID="72e3d39e506db97742eaf666cdaf176e8b5d1a71197a7c4eac4d6f32e7609458" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.673705 4903 scope.go:117] "RemoveContainer" containerID="975b2667bbd07ddf57bd0f6d098ed253d88ff1fdcab71160b786c8ff77db9693" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.697904 4903 scope.go:117] "RemoveContainer" containerID="115db9d03452ef27c97e4292c7d8d47526c8e5ede6cf99f55017f73a5b5958ea" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.718636 4903 scope.go:117] "RemoveContainer" containerID="c9737876cfc45ddfa760ab048b1fc0e74d864b2ca9ea30b9db395e230f2a4200" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.742480 4903 scope.go:117] "RemoveContainer" containerID="75cf84f6bdd8f7c3ccdacd4f16f7ccf2eb0b296a54dbd45763753d9cfa08eb89" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.771618 4903 scope.go:117] "RemoveContainer" containerID="0d1ea7e821d03a7c32ddf82f43f1bb77a4c18f114b371b772fdfa930d4a338f5" Jan 28 16:10:49 crc kubenswrapper[4903]: I0128 16:10:49.809166 4903 scope.go:117] "RemoveContainer" containerID="d8a74584b686d6ab5913a3d1a5bdaf5d4115fabca3b023a2faf39781ba497fbe" Jan 28 16:10:56 crc kubenswrapper[4903]: I0128 16:10:56.614349 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 28 16:10:56 crc kubenswrapper[4903]: I0128 16:10:56.615879 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:10:56 crc kubenswrapper[4903]: I0128 16:10:56.615985 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:10:56 crc kubenswrapper[4903]: I0128 16:10:56.616716 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:10:56 crc kubenswrapper[4903]: I0128 16:10:56.616792 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" gracePeriod=600 Jan 28 16:10:56 crc kubenswrapper[4903]: E0128 16:10:56.745199 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:10:57 crc kubenswrapper[4903]: I0128 16:10:57.369124 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" exitCode=0 Jan 28 16:10:57 crc kubenswrapper[4903]: I0128 16:10:57.369177 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c"} Jan 28 16:10:57 crc kubenswrapper[4903]: I0128 16:10:57.369563 4903 scope.go:117] "RemoveContainer" containerID="993067151bbc38bd867efd2a0048a350ec2c3e1b2fa7b3b79554189c276ba379" Jan 28 16:10:57 crc kubenswrapper[4903]: I0128 16:10:57.370230 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:10:57 crc kubenswrapper[4903]: E0128 16:10:57.370705 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:11:10 crc kubenswrapper[4903]: I0128 16:11:10.413007 4903 scope.go:117] "RemoveContainer" 
containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:11:10 crc kubenswrapper[4903]: E0128 16:11:10.413754 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:11:21 crc kubenswrapper[4903]: I0128 16:11:21.413824 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:11:21 crc kubenswrapper[4903]: E0128 16:11:21.414640 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:11:34 crc kubenswrapper[4903]: I0128 16:11:34.413123 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:11:34 crc kubenswrapper[4903]: E0128 16:11:34.414094 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:11:45 crc kubenswrapper[4903]: I0128 16:11:45.414216 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:11:45 crc kubenswrapper[4903]: E0128 16:11:45.414915 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.376411 4903 scope.go:117] "RemoveContainer" containerID="ce9969752253223f3d742d8c53554034aef2e9373de610bd14d5da8524527791" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.409725 4903 scope.go:117] "RemoveContainer" containerID="f3ee9e5cfbbafd5dc56d89c7e74393a28ffee89903a0a3a5fc996e7f437166e1" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.443002 4903 scope.go:117] "RemoveContainer" containerID="25961eeabd65eca0650b5d0be864c09befff100fb2eaf5027e16291818437b2f" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.467645 4903 scope.go:117] "RemoveContainer" containerID="b0ba7bf51857c58dce88f9dc8f3151005562f9b649e33a130e03fb6d753bfd31" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.488121 4903 scope.go:117] "RemoveContainer" containerID="b28636be26f25d88749455781312c6f8a09daa88d13b8906d341951f0018609b" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.510283 4903 
scope.go:117] "RemoveContainer" containerID="55f8bf6d429541e01c475597f5351c29e93f7bfc9a5aa0340d04790b146db9ba" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.535972 4903 scope.go:117] "RemoveContainer" containerID="5780ec06d8ceb9a89fcd3d92e75fb12da978f6318a08411b3440fd4a059a15b6" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.551381 4903 scope.go:117] "RemoveContainer" containerID="c7bf1e8f41ac47e5ad10262b8826c4d9516f64bb9a727ac6db342e3fd3db3370" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.593437 4903 scope.go:117] "RemoveContainer" containerID="e0e80f47839bbcb8f5346c467d9fee38bcbb41843ae018369013a7744ee00b5b" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.619421 4903 scope.go:117] "RemoveContainer" containerID="5e4efe128a7bf150172b57c3c25cab4bc80693cce0b769bf104d5de605e7d6cd" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.642647 4903 scope.go:117] "RemoveContainer" containerID="872e24b6cedba9cb408f04bf14e0fe63bb921732f112295d5895a6b7b077fee6" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.677311 4903 scope.go:117] "RemoveContainer" containerID="630d1568fb7af1b219114384dc4e2056041faa5abd0a851fa1ecc695972d5996" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.697134 4903 scope.go:117] "RemoveContainer" containerID="88ff76f959567db6f39526feebffb81b78f5255275de9bbc4a80749b058e9db4" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.722957 4903 scope.go:117] "RemoveContainer" containerID="aa106559349288d080d838447b60274c9745f90e6b33e2f44943504bea86dd3f" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.740810 4903 scope.go:117] "RemoveContainer" containerID="51120ba1ed6d76e83c977236b133e7a9a3d15e90becedbbdf05053eb8c96eb2b" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.770046 4903 scope.go:117] "RemoveContainer" containerID="726d659d440d5494927b0d694b4e4cf744221303a1fb4b4596b02e56d758859c" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.786423 4903 scope.go:117] "RemoveContainer" containerID="f4cf9c1424be6cb1b11b137ac05431dbac51c733f0ebb6bb50c0edf731b0838d" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.805781 4903 scope.go:117] "RemoveContainer" containerID="fc9ecb8ba71fe2f33aa423e27d386ee156e01204e113825ea9be4174fda6a516" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.828341 4903 scope.go:117] "RemoveContainer" containerID="99680c3ae0227fe3f1b5f6393451329ff41529f7a08e190be62136a8e1bc203e" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.867445 4903 scope.go:117] "RemoveContainer" containerID="7b254ac934a2239d6d4a13a900aec90e10f52506dada4040c9739c1b25c9d748" Jan 28 16:11:50 crc kubenswrapper[4903]: I0128 16:11:50.901841 4903 scope.go:117] "RemoveContainer" containerID="9218080a89992ee3b663ba0a8a93799448851ac87830265472a684a880afd6b0" Jan 28 16:11:56 crc kubenswrapper[4903]: I0128 16:11:56.415288 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:11:56 crc kubenswrapper[4903]: E0128 16:11:56.416229 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:12:07 crc kubenswrapper[4903]: I0128 16:12:07.413339 4903 scope.go:117] 
"RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:12:07 crc kubenswrapper[4903]: E0128 16:12:07.414326 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:12:18 crc kubenswrapper[4903]: I0128 16:12:18.418453 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:12:18 crc kubenswrapper[4903]: E0128 16:12:18.419495 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:12:29 crc kubenswrapper[4903]: I0128 16:12:29.413671 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:12:29 crc kubenswrapper[4903]: E0128 16:12:29.414406 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.443852 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vppds"] Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.445466 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="openstack-network-exporter" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.445595 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="openstack-network-exporter" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.445686 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.445757 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-server" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.445840 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.445921 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-server" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.445980 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-log" Jan 28 16:12:32 crc 
kubenswrapper[4903]: I0128 16:12:32.446031 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446081 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.446132 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446201 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.446281 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446345 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="ovn-northd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.446403 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="ovn-northd" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446459 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.446508 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446583 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.446652 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446735 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerName="ovn-controller" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.446806 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerName="ovn-controller" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446878 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="rsync" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.446933 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="rsync" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.446984 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="sg-core" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.447038 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="sg-core" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.447111 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.447176 
4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.447229 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.447298 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.447350 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.447407 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.447507 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.447678 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.447742 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" containerName="kube-state-metrics" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.447792 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" containerName="kube-state-metrics" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.447842 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.447890 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.447943 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448019 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.448090 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-reaper" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448140 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-reaper" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.448194 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-notification-agent" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448243 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-notification-agent" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.448295 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2d08ed75-05f7-4c45-bc6e-0562a7bbb936" containerName="nova-cell0-conductor-conductor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448351 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d08ed75-05f7-4c45-bc6e-0562a7bbb936" containerName="nova-cell0-conductor-conductor" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.448444 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef215ce-85eb-4148-848a-aeb5a15e343e" containerName="nova-scheduler-scheduler" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448516 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef215ce-85eb-4148-848a-aeb5a15e343e" containerName="nova-scheduler-scheduler" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.448631 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448703 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.448770 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-expirer" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448838 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-expirer" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.448903 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac3a1bb-718a-42b1-9c87-71258a05b083" containerName="memcached" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.448961 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac3a1bb-718a-42b1-9c87-71258a05b083" containerName="memcached" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449018 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerName="setup-container" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449076 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerName="setup-container" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449135 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6d6643-926c-4d0d-8986-a7c56e748e3f" containerName="keystone-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449196 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6d6643-926c-4d0d-8986-a7c56e748e3f" containerName="keystone-api" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449259 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" containerName="setup-container" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449317 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" containerName="setup-container" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449377 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449446 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-api" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449520 4903 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d3c39267-5b08-4783-b267-7ee6395020f2" containerName="nova-cell1-conductor-conductor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449616 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c39267-5b08-4783-b267-7ee6395020f2" containerName="nova-cell1-conductor-conductor" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449676 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449727 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449774 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449825 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-server" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449876 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-metadata" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.449924 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-metadata" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.449981 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="proxy-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.450034 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="proxy-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.450086 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="swift-recon-cron" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.450134 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="swift-recon-cron" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.450184 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-central-agent" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.450241 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-central-agent" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.450320 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" containerName="rabbitmq" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.450389 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" containerName="rabbitmq" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.450467 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.450558 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 
16:12:32.450635 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.450705 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.450784 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-updater" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.450856 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-updater" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.450931 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server-init" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.451120 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server-init" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.451207 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerName="rabbitmq" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.451308 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerName="rabbitmq" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.451365 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.451419 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-api" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.451479 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.451569 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.451623 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerName="mysql-bootstrap" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.451676 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerName="mysql-bootstrap" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.451730 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.451791 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.451864 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.451921 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-replicator" Jan 28 16:12:32 crc 
kubenswrapper[4903]: E0128 16:12:32.451971 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-updater" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.452020 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-updater" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.452071 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.452122 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.452310 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.452362 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-api" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.452414 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.452473 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.452563 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.452626 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-log" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.452688 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.452741 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:12:32 crc kubenswrapper[4903]: E0128 16:12:32.452795 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerName="galera" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.452843 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerName="galera" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453048 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac3a1bb-718a-42b1-9c87-71258a05b083" containerName="memcached" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453106 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-notification-agent" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453163 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="sg-core" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453215 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker" Jan 28 
16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453269 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453339 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453433 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453508 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453621 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="proxy-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453703 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453777 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-reaper" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453835 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453889 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a65ed0-8012-4a4a-b973-8b1fcdafef52" containerName="ceilometer-central-agent" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.453944 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6d6643-926c-4d0d-8986-a7c56e748e3f" containerName="keystone-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454004 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454076 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454136 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3ca866-aac2-4b4f-ac25-71e741d9db2f" containerName="glance-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454190 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454244 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="rsync" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454318 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454399 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" containerName="openstack-network-exporter" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454486 4903 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9d45d584-dc21-48a4-842d-ab47fcfdd63d" containerName="galera" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454577 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ee3fde-97aa-4dd8-a083-87dfdc8fb1ba" containerName="kube-state-metrics" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454670 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454742 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f1f4e5-22a4-420b-b6f2-8f936c5c39c9" containerName="nova-api-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454812 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454892 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovs-vswitchd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.454970 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455046 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-updater" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455125 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef215ce-85eb-4148-848a-aeb5a15e343e" containerName="nova-scheduler-scheduler" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455210 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee6442c-f9ef-4902-b6ec-2bc01a904849" containerName="rabbitmq" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455306 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a30cd9-7e56-4a30-8b2d-7786c742c248" containerName="ovn-controller" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455379 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7483e7-0a5f-47dd-9f1a-baaed6822ffd" containerName="glance-httpd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455445 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455548 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="swift-recon-cron" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455618 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="64646a57-b496-4bf3-8b63-d53321316304" containerName="barbican-worker-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455691 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c39267-5b08-4783-b267-7ee6395020f2" containerName="nova-cell1-conductor-conductor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455755 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="87970b20-51e0-4e11-875a-8dea3b633ac5" containerName="ovsdb-server" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.455831 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f6e7cc-c41e-47b0-8b46-6ec53e998cbe" 
containerName="ovn-northd" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.458378 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.458458 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="777a1f56-3b78-4161-b388-22d924bf442c" containerName="neutron-api" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.458638 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-updater" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.458868 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="object-expirer" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.459005 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="80fc9b4a-8eb0-41c9-8809-7d83f117c3b9" containerName="barbican-keystone-listener" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.459774 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4f5f43-7fbc-41d1-935d-b0844db162a7" containerName="nova-metadata-metadata" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.459870 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="438d1db6-7b20-4f31-8a43-aa8f0c972501" containerName="barbican-api-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.459954 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="account-auditor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.460032 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb51034c-4387-4aba-8eff-6ff960538da9" containerName="rabbitmq" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.460105 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d08ed75-05f7-4c45-bc6e-0562a7bbb936" containerName="nova-cell0-conductor-conductor" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.460162 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91d56c5-1ada-417a-8a87-dc4e3960a186" containerName="placement-log" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.460213 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe73c5e-1acc-4125-8ff9-e42b69488039" containerName="container-replicator" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.461668 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.462843 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vppds"] Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.526672 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-utilities\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.526719 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxk9\" (UniqueName: \"kubernetes.io/projected/9545d344-df18-44d5-9379-98c57b58345c-kube-api-access-rnxk9\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.526760 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-catalog-content\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.628841 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-utilities\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.628903 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxk9\" (UniqueName: \"kubernetes.io/projected/9545d344-df18-44d5-9379-98c57b58345c-kube-api-access-rnxk9\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.628942 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-catalog-content\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.629361 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-utilities\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.629453 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-catalog-content\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.649865 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rnxk9\" (UniqueName: \"kubernetes.io/projected/9545d344-df18-44d5-9379-98c57b58345c-kube-api-access-rnxk9\") pod \"community-operators-vppds\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:32 crc kubenswrapper[4903]: I0128 16:12:32.780197 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:33 crc kubenswrapper[4903]: I0128 16:12:33.273132 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vppds"] Jan 28 16:12:34 crc kubenswrapper[4903]: I0128 16:12:34.221784 4903 generic.go:334] "Generic (PLEG): container finished" podID="9545d344-df18-44d5-9379-98c57b58345c" containerID="4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74" exitCode=0 Jan 28 16:12:34 crc kubenswrapper[4903]: I0128 16:12:34.222094 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vppds" event={"ID":"9545d344-df18-44d5-9379-98c57b58345c","Type":"ContainerDied","Data":"4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74"} Jan 28 16:12:34 crc kubenswrapper[4903]: I0128 16:12:34.222129 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vppds" event={"ID":"9545d344-df18-44d5-9379-98c57b58345c","Type":"ContainerStarted","Data":"355681d2ae47875e4bafad0615ee479e3e34c73bdd870743d2487fb89d5e6ddd"} Jan 28 16:12:35 crc kubenswrapper[4903]: I0128 16:12:35.229685 4903 generic.go:334] "Generic (PLEG): container finished" podID="9545d344-df18-44d5-9379-98c57b58345c" containerID="e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042" exitCode=0 Jan 28 16:12:35 crc kubenswrapper[4903]: I0128 16:12:35.229720 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vppds" event={"ID":"9545d344-df18-44d5-9379-98c57b58345c","Type":"ContainerDied","Data":"e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042"} Jan 28 16:12:36 crc kubenswrapper[4903]: I0128 16:12:36.238627 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vppds" event={"ID":"9545d344-df18-44d5-9379-98c57b58345c","Type":"ContainerStarted","Data":"4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774"} Jan 28 16:12:36 crc kubenswrapper[4903]: I0128 16:12:36.258019 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vppds" podStartSLOduration=2.85751329 podStartE2EDuration="4.257999109s" podCreationTimestamp="2026-01-28 16:12:32 +0000 UTC" firstStartedPulling="2026-01-28 16:12:34.223598901 +0000 UTC m=+1626.499570422" lastFinishedPulling="2026-01-28 16:12:35.62408473 +0000 UTC m=+1627.900056241" observedRunningTime="2026-01-28 16:12:36.254133734 +0000 UTC m=+1628.530105235" watchObservedRunningTime="2026-01-28 16:12:36.257999109 +0000 UTC m=+1628.533970620" Jan 28 16:12:42 crc kubenswrapper[4903]: I0128 16:12:42.780633 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:42 crc kubenswrapper[4903]: I0128 16:12:42.781496 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:42 crc kubenswrapper[4903]: I0128 16:12:42.828708 4903 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:43 crc kubenswrapper[4903]: I0128 16:12:43.343321 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:43 crc kubenswrapper[4903]: I0128 16:12:43.394097 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vppds"] Jan 28 16:12:43 crc kubenswrapper[4903]: I0128 16:12:43.413865 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:12:43 crc kubenswrapper[4903]: E0128 16:12:43.414070 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.313387 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vppds" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="registry-server" containerID="cri-o://4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774" gracePeriod=2 Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.681266 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.731416 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-utilities\") pod \"9545d344-df18-44d5-9379-98c57b58345c\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.731549 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxk9\" (UniqueName: \"kubernetes.io/projected/9545d344-df18-44d5-9379-98c57b58345c-kube-api-access-rnxk9\") pod \"9545d344-df18-44d5-9379-98c57b58345c\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.731588 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-catalog-content\") pod \"9545d344-df18-44d5-9379-98c57b58345c\" (UID: \"9545d344-df18-44d5-9379-98c57b58345c\") " Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.732542 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-utilities" (OuterVolumeSpecName: "utilities") pod "9545d344-df18-44d5-9379-98c57b58345c" (UID: "9545d344-df18-44d5-9379-98c57b58345c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.744139 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9545d344-df18-44d5-9379-98c57b58345c-kube-api-access-rnxk9" (OuterVolumeSpecName: "kube-api-access-rnxk9") pod "9545d344-df18-44d5-9379-98c57b58345c" (UID: "9545d344-df18-44d5-9379-98c57b58345c"). InnerVolumeSpecName "kube-api-access-rnxk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.832991 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:12:45 crc kubenswrapper[4903]: I0128 16:12:45.833018 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxk9\" (UniqueName: \"kubernetes.io/projected/9545d344-df18-44d5-9379-98c57b58345c-kube-api-access-rnxk9\") on node \"crc\" DevicePath \"\"" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.289692 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9545d344-df18-44d5-9379-98c57b58345c" (UID: "9545d344-df18-44d5-9379-98c57b58345c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.326355 4903 generic.go:334] "Generic (PLEG): container finished" podID="9545d344-df18-44d5-9379-98c57b58345c" containerID="4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774" exitCode=0 Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.326407 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vppds" event={"ID":"9545d344-df18-44d5-9379-98c57b58345c","Type":"ContainerDied","Data":"4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774"} Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.326438 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vppds" event={"ID":"9545d344-df18-44d5-9379-98c57b58345c","Type":"ContainerDied","Data":"355681d2ae47875e4bafad0615ee479e3e34c73bdd870743d2487fb89d5e6ddd"} Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.326458 4903 scope.go:117] "RemoveContainer" containerID="4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.326494 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vppds" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.339934 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9545d344-df18-44d5-9379-98c57b58345c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.353948 4903 scope.go:117] "RemoveContainer" containerID="e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.376853 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vppds"] Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.384140 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vppds"] Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.384517 4903 scope.go:117] "RemoveContainer" containerID="4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.406998 4903 scope.go:117] "RemoveContainer" containerID="4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774" Jan 28 16:12:46 crc kubenswrapper[4903]: E0128 16:12:46.407370 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774\": container with ID starting with 4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774 not found: ID does not exist" containerID="4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.407400 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774"} err="failed to get container status \"4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774\": rpc error: code = NotFound desc = could not find container \"4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774\": container with ID starting with 4f6cf91c48d2e788c406f882c6294db0a4e47344a546f78f875003ba663fd774 not found: ID does not exist" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.407420 4903 scope.go:117] "RemoveContainer" containerID="e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042" Jan 28 16:12:46 crc kubenswrapper[4903]: E0128 16:12:46.407879 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042\": container with ID starting with e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042 not found: ID does not exist" containerID="e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.407941 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042"} err="failed to get container status \"e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042\": rpc error: code = NotFound desc = could not find container \"e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042\": container with ID starting with e558ef4d68330ca5a3284ad91d6ba2aa4ca41b5ec8f23b7318c0d9f584559042 not found: ID does not exist" Jan 28 16:12:46 crc 
kubenswrapper[4903]: I0128 16:12:46.407977 4903 scope.go:117] "RemoveContainer" containerID="4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74" Jan 28 16:12:46 crc kubenswrapper[4903]: E0128 16:12:46.408231 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74\": container with ID starting with 4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74 not found: ID does not exist" containerID="4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.408277 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74"} err="failed to get container status \"4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74\": rpc error: code = NotFound desc = could not find container \"4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74\": container with ID starting with 4d5f85c12706e33b0a043354d683034f5f064ce036bd8f30475b0980edbebc74 not found: ID does not exist" Jan 28 16:12:46 crc kubenswrapper[4903]: I0128 16:12:46.422647 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9545d344-df18-44d5-9379-98c57b58345c" path="/var/lib/kubelet/pods/9545d344-df18-44d5-9379-98c57b58345c/volumes" Jan 28 16:12:51 crc kubenswrapper[4903]: I0128 16:12:51.126467 4903 scope.go:117] "RemoveContainer" containerID="632e3391107c4240634e273ea5d1f8da2c43dc4bda1903457309b1f160ab2508" Jan 28 16:12:51 crc kubenswrapper[4903]: I0128 16:12:51.166514 4903 scope.go:117] "RemoveContainer" containerID="ba9719234409e77c7d6cc555d76f304aa157ad008d6da259306237c307202308" Jan 28 16:12:51 crc kubenswrapper[4903]: I0128 16:12:51.210762 4903 scope.go:117] "RemoveContainer" containerID="6d811b9422e35f2b1a84be2e0cb79a920072e49aade0e343dd02d1459cc291c2" Jan 28 16:12:58 crc kubenswrapper[4903]: I0128 16:12:58.418219 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:12:58 crc kubenswrapper[4903]: E0128 16:12:58.418906 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:13:11 crc kubenswrapper[4903]: I0128 16:13:11.414020 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:13:11 crc kubenswrapper[4903]: E0128 16:13:11.414855 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:13:23 crc kubenswrapper[4903]: I0128 16:13:23.413177 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:13:23 crc 
kubenswrapper[4903]: E0128 16:13:23.413778 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:13:34 crc kubenswrapper[4903]: I0128 16:13:34.413784 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:13:34 crc kubenswrapper[4903]: E0128 16:13:34.414707 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:13:45 crc kubenswrapper[4903]: I0128 16:13:45.414229 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:13:45 crc kubenswrapper[4903]: E0128 16:13:45.415074 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:13:51 crc kubenswrapper[4903]: I0128 16:13:51.331622 4903 scope.go:117] "RemoveContainer" containerID="1745bf5b44dc2b638f573aed0bd13f0c645dc6779b31ff17507a4f55bf433cbe" Jan 28 16:13:51 crc kubenswrapper[4903]: I0128 16:13:51.358821 4903 scope.go:117] "RemoveContainer" containerID="e504a8ff9406e8c82665b294595d875daa04f16ecc7011c455d97944fbe1af52" Jan 28 16:13:51 crc kubenswrapper[4903]: I0128 16:13:51.378903 4903 scope.go:117] "RemoveContainer" containerID="ffb559a01621d570504e53e37340cccc96d1714cec7712f2a1c2850cc3db6fee" Jan 28 16:13:51 crc kubenswrapper[4903]: I0128 16:13:51.399738 4903 scope.go:117] "RemoveContainer" containerID="3d79c7fac1e05948f71a70b69d96880c808e2171372e84d17bd6b7678b6acf18" Jan 28 16:13:51 crc kubenswrapper[4903]: I0128 16:13:51.425241 4903 scope.go:117] "RemoveContainer" containerID="d809e13332af93721ef1dd254566bd94490c3996a4bf8acf4d8aef340c6f49cd" Jan 28 16:13:51 crc kubenswrapper[4903]: I0128 16:13:51.455995 4903 scope.go:117] "RemoveContainer" containerID="be6539b41f3ffef5e41bc2a35bcc5c813d4bb875f67dde5ceb756b4027a5dd69" Jan 28 16:13:51 crc kubenswrapper[4903]: I0128 16:13:51.496224 4903 scope.go:117] "RemoveContainer" containerID="da3ec781d8646476efbdb53148c4a00afd58db09510fb0c855cd6849637a2a99" Jan 28 16:13:59 crc kubenswrapper[4903]: I0128 16:13:59.413611 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:13:59 crc kubenswrapper[4903]: E0128 16:13:59.414221 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:14:10 crc kubenswrapper[4903]: I0128 16:14:10.413806 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:14:10 crc kubenswrapper[4903]: E0128 16:14:10.414861 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:14:25 crc kubenswrapper[4903]: I0128 16:14:25.413837 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:14:25 crc kubenswrapper[4903]: E0128 16:14:25.414590 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:14:39 crc kubenswrapper[4903]: I0128 16:14:39.413807 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:14:39 crc kubenswrapper[4903]: E0128 16:14:39.414826 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:14:50 crc kubenswrapper[4903]: I0128 16:14:50.418691 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:14:50 crc kubenswrapper[4903]: E0128 16:14:50.419987 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.624283 4903 scope.go:117] "RemoveContainer" containerID="6277976b066e33086b843796112a47cd3c785a0a906a6cc042e802b71a70d947" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.681154 4903 scope.go:117] "RemoveContainer" containerID="a2b24315b8f846b0c4f8ca5e92f63fee6c13fd076e3a020d8137457512b1940e" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.720194 4903 scope.go:117] "RemoveContainer" containerID="ac49defcc977e6c260d4743e9000a1e960aad0f05447b83c8d3471c0be564349" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.738473 4903 scope.go:117] "RemoveContainer" 
containerID="b6c753060fc37429d6df0849adc55674f2f1d9fd058720bf0ad6a6bf1803a871" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.761355 4903 scope.go:117] "RemoveContainer" containerID="6208e65787fde5e4a197f4021077ab14af4e2cfe8f6c3dac084a147e070ddc73" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.779165 4903 scope.go:117] "RemoveContainer" containerID="5a24b6f133be8724bb507b4e07921d6f2881f6d7f964099e7ffc67db065083a0" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.800110 4903 scope.go:117] "RemoveContainer" containerID="06ec5bf78244073fbc75582f2d763a541cf3c9382c354bce90fc029828ab6d27" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.817263 4903 scope.go:117] "RemoveContainer" containerID="9b72e9c8533bb484a5098b45f3eefd44f36db58f9766a8b98b45742025cd67d5" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.833881 4903 scope.go:117] "RemoveContainer" containerID="632121b9f7a66f9efff716bc37e145c7bd7773fef79d06a0a19738ee1bc8f049" Jan 28 16:14:51 crc kubenswrapper[4903]: I0128 16:14:51.861201 4903 scope.go:117] "RemoveContainer" containerID="b0255b57a675117144280410a3cc8cc8f9253a8332aa44feaff2f186b1d47758" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.153053 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c"] Jan 28 16:15:00 crc kubenswrapper[4903]: E0128 16:15:00.155034 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.155056 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4903]: E0128 16:15:00.155067 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.155074 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4903]: E0128 16:15:00.155096 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.155102 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.155257 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9545d344-df18-44d5-9379-98c57b58345c" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.155714 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.157940 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.159020 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.174232 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c"] Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.316977 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2421664-8bc4-4ab4-b292-2d0ed0db5585-config-volume\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.317081 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2421664-8bc4-4ab4-b292-2d0ed0db5585-secret-volume\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.317135 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7l6\" (UniqueName: \"kubernetes.io/projected/a2421664-8bc4-4ab4-b292-2d0ed0db5585-kube-api-access-sf7l6\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.418611 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2421664-8bc4-4ab4-b292-2d0ed0db5585-secret-volume\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.418702 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf7l6\" (UniqueName: \"kubernetes.io/projected/a2421664-8bc4-4ab4-b292-2d0ed0db5585-kube-api-access-sf7l6\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.418813 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2421664-8bc4-4ab4-b292-2d0ed0db5585-config-volume\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.420918 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2421664-8bc4-4ab4-b292-2d0ed0db5585-config-volume\") pod 
\"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.429122 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2421664-8bc4-4ab4-b292-2d0ed0db5585-secret-volume\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.436268 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf7l6\" (UniqueName: \"kubernetes.io/projected/a2421664-8bc4-4ab4-b292-2d0ed0db5585-kube-api-access-sf7l6\") pod \"collect-profiles-29493615-g9f6c\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.479342 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:00 crc kubenswrapper[4903]: I0128 16:15:00.954228 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c"] Jan 28 16:15:00 crc kubenswrapper[4903]: W0128 16:15:00.961086 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2421664_8bc4_4ab4_b292_2d0ed0db5585.slice/crio-463fa96be2179ca1dfc90d5b445058e92a0c770398967f65837f051949499143 WatchSource:0}: Error finding container 463fa96be2179ca1dfc90d5b445058e92a0c770398967f65837f051949499143: Status 404 returned error can't find the container with id 463fa96be2179ca1dfc90d5b445058e92a0c770398967f65837f051949499143 Jan 28 16:15:01 crc kubenswrapper[4903]: I0128 16:15:01.416609 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:15:01 crc kubenswrapper[4903]: E0128 16:15:01.416963 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:15:01 crc kubenswrapper[4903]: I0128 16:15:01.427487 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" event={"ID":"a2421664-8bc4-4ab4-b292-2d0ed0db5585","Type":"ContainerStarted","Data":"5b79179d68af474d805b94af19d67f4050788b014b902e90b5a208811690bd59"} Jan 28 16:15:01 crc kubenswrapper[4903]: I0128 16:15:01.427619 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" event={"ID":"a2421664-8bc4-4ab4-b292-2d0ed0db5585","Type":"ContainerStarted","Data":"463fa96be2179ca1dfc90d5b445058e92a0c770398967f65837f051949499143"} Jan 28 16:15:01 crc kubenswrapper[4903]: I0128 16:15:01.458639 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" 
podStartSLOduration=1.4585931699999999 podStartE2EDuration="1.45859317s" podCreationTimestamp="2026-01-28 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:15:01.450685164 +0000 UTC m=+1773.726656675" watchObservedRunningTime="2026-01-28 16:15:01.45859317 +0000 UTC m=+1773.734564691" Jan 28 16:15:02 crc kubenswrapper[4903]: I0128 16:15:02.445789 4903 generic.go:334] "Generic (PLEG): container finished" podID="a2421664-8bc4-4ab4-b292-2d0ed0db5585" containerID="5b79179d68af474d805b94af19d67f4050788b014b902e90b5a208811690bd59" exitCode=0 Jan 28 16:15:02 crc kubenswrapper[4903]: I0128 16:15:02.445852 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" event={"ID":"a2421664-8bc4-4ab4-b292-2d0ed0db5585","Type":"ContainerDied","Data":"5b79179d68af474d805b94af19d67f4050788b014b902e90b5a208811690bd59"} Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.700576 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.875453 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2421664-8bc4-4ab4-b292-2d0ed0db5585-secret-volume\") pod \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.875642 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2421664-8bc4-4ab4-b292-2d0ed0db5585-config-volume\") pod \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.875716 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf7l6\" (UniqueName: \"kubernetes.io/projected/a2421664-8bc4-4ab4-b292-2d0ed0db5585-kube-api-access-sf7l6\") pod \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\" (UID: \"a2421664-8bc4-4ab4-b292-2d0ed0db5585\") " Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.876980 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2421664-8bc4-4ab4-b292-2d0ed0db5585-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2421664-8bc4-4ab4-b292-2d0ed0db5585" (UID: "a2421664-8bc4-4ab4-b292-2d0ed0db5585"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.882936 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2421664-8bc4-4ab4-b292-2d0ed0db5585-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2421664-8bc4-4ab4-b292-2d0ed0db5585" (UID: "a2421664-8bc4-4ab4-b292-2d0ed0db5585"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.883459 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2421664-8bc4-4ab4-b292-2d0ed0db5585-kube-api-access-sf7l6" (OuterVolumeSpecName: "kube-api-access-sf7l6") pod "a2421664-8bc4-4ab4-b292-2d0ed0db5585" (UID: "a2421664-8bc4-4ab4-b292-2d0ed0db5585"). 
InnerVolumeSpecName "kube-api-access-sf7l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.977345 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2421664-8bc4-4ab4-b292-2d0ed0db5585-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.977378 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2421664-8bc4-4ab4-b292-2d0ed0db5585-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:03 crc kubenswrapper[4903]: I0128 16:15:03.977390 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf7l6\" (UniqueName: \"kubernetes.io/projected/a2421664-8bc4-4ab4-b292-2d0ed0db5585-kube-api-access-sf7l6\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4903]: I0128 16:15:04.458615 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" event={"ID":"a2421664-8bc4-4ab4-b292-2d0ed0db5585","Type":"ContainerDied","Data":"463fa96be2179ca1dfc90d5b445058e92a0c770398967f65837f051949499143"} Jan 28 16:15:04 crc kubenswrapper[4903]: I0128 16:15:04.458669 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463fa96be2179ca1dfc90d5b445058e92a0c770398967f65837f051949499143" Jan 28 16:15:04 crc kubenswrapper[4903]: I0128 16:15:04.458698 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c" Jan 28 16:15:12 crc kubenswrapper[4903]: I0128 16:15:12.413762 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:15:12 crc kubenswrapper[4903]: E0128 16:15:12.414512 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:15:27 crc kubenswrapper[4903]: I0128 16:15:27.413409 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:15:27 crc kubenswrapper[4903]: E0128 16:15:27.414156 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.470722 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wp7lq"] Jan 28 16:15:30 crc kubenswrapper[4903]: E0128 16:15:30.471308 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2421664-8bc4-4ab4-b292-2d0ed0db5585" containerName="collect-profiles" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.471330 4903 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a2421664-8bc4-4ab4-b292-2d0ed0db5585" containerName="collect-profiles" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.471483 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2421664-8bc4-4ab4-b292-2d0ed0db5585" containerName="collect-profiles" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.472458 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.485307 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wp7lq"] Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.559280 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6mc\" (UniqueName: \"kubernetes.io/projected/64e294b6-3e85-4129-96cf-17ff6156c19d-kube-api-access-mv6mc\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.559384 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-catalog-content\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.559471 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-utilities\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.660574 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-utilities\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.660646 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6mc\" (UniqueName: \"kubernetes.io/projected/64e294b6-3e85-4129-96cf-17ff6156c19d-kube-api-access-mv6mc\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.660692 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-catalog-content\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.661297 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-utilities\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.661331 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-catalog-content\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.684425 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6mc\" (UniqueName: \"kubernetes.io/projected/64e294b6-3e85-4129-96cf-17ff6156c19d-kube-api-access-mv6mc\") pod \"certified-operators-wp7lq\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:30 crc kubenswrapper[4903]: I0128 16:15:30.797718 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:31 crc kubenswrapper[4903]: I0128 16:15:31.257752 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wp7lq"] Jan 28 16:15:31 crc kubenswrapper[4903]: I0128 16:15:31.653744 4903 generic.go:334] "Generic (PLEG): container finished" podID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerID="54fc57140a5acfe8c52a992d2227c4875050660cb1116e602c3b1964758d5550" exitCode=0 Jan 28 16:15:31 crc kubenswrapper[4903]: I0128 16:15:31.653868 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp7lq" event={"ID":"64e294b6-3e85-4129-96cf-17ff6156c19d","Type":"ContainerDied","Data":"54fc57140a5acfe8c52a992d2227c4875050660cb1116e602c3b1964758d5550"} Jan 28 16:15:31 crc kubenswrapper[4903]: I0128 16:15:31.654134 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp7lq" event={"ID":"64e294b6-3e85-4129-96cf-17ff6156c19d","Type":"ContainerStarted","Data":"a6e02c8e2080ffb10862d435880fcefb086f7fb6a99a4773eae86ecae1ef0483"} Jan 28 16:15:31 crc kubenswrapper[4903]: I0128 16:15:31.655667 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.265020 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l4rfq"] Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.266433 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.282649 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4rfq"] Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.289157 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-catalog-content\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.289198 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8sp\" (UniqueName: \"kubernetes.io/projected/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-kube-api-access-4p8sp\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.289329 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-utilities\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.391076 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-utilities\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.391170 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-catalog-content\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.391204 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8sp\" (UniqueName: \"kubernetes.io/projected/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-kube-api-access-4p8sp\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.392016 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-utilities\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.392232 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-catalog-content\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.420287 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4p8sp\" (UniqueName: \"kubernetes.io/projected/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-kube-api-access-4p8sp\") pod \"redhat-marketplace-l4rfq\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:32 crc kubenswrapper[4903]: I0128 16:15:32.588287 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:33 crc kubenswrapper[4903]: I0128 16:15:33.064930 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4rfq"] Jan 28 16:15:33 crc kubenswrapper[4903]: I0128 16:15:33.674026 4903 generic.go:334] "Generic (PLEG): container finished" podID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerID="0fb5b71b0ec2471dac003d5cc2ec96d32735c7f7434a7336bbf4cf1d941c4320" exitCode=0 Jan 28 16:15:33 crc kubenswrapper[4903]: I0128 16:15:33.674111 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4rfq" event={"ID":"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b","Type":"ContainerDied","Data":"0fb5b71b0ec2471dac003d5cc2ec96d32735c7f7434a7336bbf4cf1d941c4320"} Jan 28 16:15:33 crc kubenswrapper[4903]: I0128 16:15:33.674166 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4rfq" event={"ID":"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b","Type":"ContainerStarted","Data":"6a2b154ea47f5090d8b74366914d03d76cd38321c5bc5dfcf639d0ab5085f42b"} Jan 28 16:15:33 crc kubenswrapper[4903]: I0128 16:15:33.676931 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp7lq" event={"ID":"64e294b6-3e85-4129-96cf-17ff6156c19d","Type":"ContainerStarted","Data":"7150ab70abbccbc4fdcf690b42bbba52a0658ca1fd70fdf803a43ada48610e4b"} Jan 28 16:15:34 crc kubenswrapper[4903]: I0128 16:15:34.688130 4903 generic.go:334] "Generic (PLEG): container finished" podID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerID="7150ab70abbccbc4fdcf690b42bbba52a0658ca1fd70fdf803a43ada48610e4b" exitCode=0 Jan 28 16:15:34 crc kubenswrapper[4903]: I0128 16:15:34.688807 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp7lq" event={"ID":"64e294b6-3e85-4129-96cf-17ff6156c19d","Type":"ContainerDied","Data":"7150ab70abbccbc4fdcf690b42bbba52a0658ca1fd70fdf803a43ada48610e4b"} Jan 28 16:15:35 crc kubenswrapper[4903]: I0128 16:15:35.698609 4903 generic.go:334] "Generic (PLEG): container finished" podID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerID="833256a887383f2a7ed1ab353c60a630698d0f4fa80dfc78cc0d919a7fb31f57" exitCode=0 Jan 28 16:15:35 crc kubenswrapper[4903]: I0128 16:15:35.698671 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4rfq" event={"ID":"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b","Type":"ContainerDied","Data":"833256a887383f2a7ed1ab353c60a630698d0f4fa80dfc78cc0d919a7fb31f57"} Jan 28 16:15:36 crc kubenswrapper[4903]: I0128 16:15:36.709476 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp7lq" event={"ID":"64e294b6-3e85-4129-96cf-17ff6156c19d","Type":"ContainerStarted","Data":"626e3d9a5ca90d8ec4f9420fbbc50312947d13dbe2f503bafbcde2606fd501de"} Jan 28 16:15:36 crc kubenswrapper[4903]: I0128 16:15:36.732941 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wp7lq" 
podStartSLOduration=2.858270582 podStartE2EDuration="6.732905909s" podCreationTimestamp="2026-01-28 16:15:30 +0000 UTC" firstStartedPulling="2026-01-28 16:15:31.655392346 +0000 UTC m=+1803.931363857" lastFinishedPulling="2026-01-28 16:15:35.530027673 +0000 UTC m=+1807.805999184" observedRunningTime="2026-01-28 16:15:36.72597163 +0000 UTC m=+1809.001943161" watchObservedRunningTime="2026-01-28 16:15:36.732905909 +0000 UTC m=+1809.008877420" Jan 28 16:15:37 crc kubenswrapper[4903]: I0128 16:15:37.721489 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4rfq" event={"ID":"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b","Type":"ContainerStarted","Data":"c495c55c60c1b9fa41a6f5616105fdca49829d34d42bc030c5a13ac6bda03ad9"} Jan 28 16:15:37 crc kubenswrapper[4903]: I0128 16:15:37.741051 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l4rfq" podStartSLOduration=2.612580926 podStartE2EDuration="5.74102533s" podCreationTimestamp="2026-01-28 16:15:32 +0000 UTC" firstStartedPulling="2026-01-28 16:15:33.676276625 +0000 UTC m=+1805.952248136" lastFinishedPulling="2026-01-28 16:15:36.804721029 +0000 UTC m=+1809.080692540" observedRunningTime="2026-01-28 16:15:37.739841618 +0000 UTC m=+1810.015813139" watchObservedRunningTime="2026-01-28 16:15:37.74102533 +0000 UTC m=+1810.016996871" Jan 28 16:15:39 crc kubenswrapper[4903]: I0128 16:15:39.413663 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:15:39 crc kubenswrapper[4903]: E0128 16:15:39.414475 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:15:40 crc kubenswrapper[4903]: I0128 16:15:40.798139 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:40 crc kubenswrapper[4903]: I0128 16:15:40.799232 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:40 crc kubenswrapper[4903]: I0128 16:15:40.845189 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:41 crc kubenswrapper[4903]: I0128 16:15:41.800816 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:42 crc kubenswrapper[4903]: I0128 16:15:42.456663 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wp7lq"] Jan 28 16:15:42 crc kubenswrapper[4903]: I0128 16:15:42.589755 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:42 crc kubenswrapper[4903]: I0128 16:15:42.590064 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:42 crc kubenswrapper[4903]: I0128 16:15:42.638770 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:42 crc kubenswrapper[4903]: I0128 16:15:42.822157 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:43 crc kubenswrapper[4903]: I0128 16:15:43.769452 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wp7lq" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="registry-server" containerID="cri-o://626e3d9a5ca90d8ec4f9420fbbc50312947d13dbe2f503bafbcde2606fd501de" gracePeriod=2 Jan 28 16:15:44 crc kubenswrapper[4903]: I0128 16:15:44.782856 4903 generic.go:334] "Generic (PLEG): container finished" podID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerID="626e3d9a5ca90d8ec4f9420fbbc50312947d13dbe2f503bafbcde2606fd501de" exitCode=0 Jan 28 16:15:44 crc kubenswrapper[4903]: I0128 16:15:44.782913 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp7lq" event={"ID":"64e294b6-3e85-4129-96cf-17ff6156c19d","Type":"ContainerDied","Data":"626e3d9a5ca90d8ec4f9420fbbc50312947d13dbe2f503bafbcde2606fd501de"} Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.061169 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4rfq"] Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.263768 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.379962 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-utilities\") pod \"64e294b6-3e85-4129-96cf-17ff6156c19d\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.380342 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-catalog-content\") pod \"64e294b6-3e85-4129-96cf-17ff6156c19d\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.380424 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6mc\" (UniqueName: \"kubernetes.io/projected/64e294b6-3e85-4129-96cf-17ff6156c19d-kube-api-access-mv6mc\") pod \"64e294b6-3e85-4129-96cf-17ff6156c19d\" (UID: \"64e294b6-3e85-4129-96cf-17ff6156c19d\") " Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.380997 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-utilities" (OuterVolumeSpecName: "utilities") pod "64e294b6-3e85-4129-96cf-17ff6156c19d" (UID: "64e294b6-3e85-4129-96cf-17ff6156c19d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.389032 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e294b6-3e85-4129-96cf-17ff6156c19d-kube-api-access-mv6mc" (OuterVolumeSpecName: "kube-api-access-mv6mc") pod "64e294b6-3e85-4129-96cf-17ff6156c19d" (UID: "64e294b6-3e85-4129-96cf-17ff6156c19d"). InnerVolumeSpecName "kube-api-access-mv6mc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.503181 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6mc\" (UniqueName: \"kubernetes.io/projected/64e294b6-3e85-4129-96cf-17ff6156c19d-kube-api-access-mv6mc\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.503616 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.794566 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp7lq" event={"ID":"64e294b6-3e85-4129-96cf-17ff6156c19d","Type":"ContainerDied","Data":"a6e02c8e2080ffb10862d435880fcefb086f7fb6a99a4773eae86ecae1ef0483"} Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.794640 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wp7lq" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.794714 4903 scope.go:117] "RemoveContainer" containerID="626e3d9a5ca90d8ec4f9420fbbc50312947d13dbe2f503bafbcde2606fd501de" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.794741 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l4rfq" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="registry-server" containerID="cri-o://c495c55c60c1b9fa41a6f5616105fdca49829d34d42bc030c5a13ac6bda03ad9" gracePeriod=2 Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.826061 4903 scope.go:117] "RemoveContainer" containerID="7150ab70abbccbc4fdcf690b42bbba52a0658ca1fd70fdf803a43ada48610e4b" Jan 28 16:15:45 crc kubenswrapper[4903]: I0128 16:15:45.849637 4903 scope.go:117] "RemoveContainer" containerID="54fc57140a5acfe8c52a992d2227c4875050660cb1116e602c3b1964758d5550" Jan 28 16:15:46 crc kubenswrapper[4903]: I0128 16:15:46.270479 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64e294b6-3e85-4129-96cf-17ff6156c19d" (UID: "64e294b6-3e85-4129-96cf-17ff6156c19d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:46 crc kubenswrapper[4903]: I0128 16:15:46.315823 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e294b6-3e85-4129-96cf-17ff6156c19d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:46 crc kubenswrapper[4903]: I0128 16:15:46.487136 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wp7lq"] Jan 28 16:15:46 crc kubenswrapper[4903]: I0128 16:15:46.493930 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wp7lq"] Jan 28 16:15:46 crc kubenswrapper[4903]: I0128 16:15:46.806885 4903 generic.go:334] "Generic (PLEG): container finished" podID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerID="c495c55c60c1b9fa41a6f5616105fdca49829d34d42bc030c5a13ac6bda03ad9" exitCode=0 Jan 28 16:15:46 crc kubenswrapper[4903]: I0128 16:15:46.806962 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4rfq" event={"ID":"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b","Type":"ContainerDied","Data":"c495c55c60c1b9fa41a6f5616105fdca49829d34d42bc030c5a13ac6bda03ad9"} Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.613054 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.733131 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-utilities\") pod \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.733274 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-catalog-content\") pod \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.733322 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p8sp\" (UniqueName: \"kubernetes.io/projected/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-kube-api-access-4p8sp\") pod \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\" (UID: \"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b\") " Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.734272 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-utilities" (OuterVolumeSpecName: "utilities") pod "1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" (UID: "1e0ed0b9-3846-451e-94e8-58f47c0e0a7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.742828 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-kube-api-access-4p8sp" (OuterVolumeSpecName: "kube-api-access-4p8sp") pod "1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" (UID: "1e0ed0b9-3846-451e-94e8-58f47c0e0a7b"). InnerVolumeSpecName "kube-api-access-4p8sp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.755295 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" (UID: "1e0ed0b9-3846-451e-94e8-58f47c0e0a7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.831418 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4rfq" event={"ID":"1e0ed0b9-3846-451e-94e8-58f47c0e0a7b","Type":"ContainerDied","Data":"6a2b154ea47f5090d8b74366914d03d76cd38321c5bc5dfcf639d0ab5085f42b"} Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.831480 4903 scope.go:117] "RemoveContainer" containerID="c495c55c60c1b9fa41a6f5616105fdca49829d34d42bc030c5a13ac6bda03ad9" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.831630 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4rfq" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.836242 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.836297 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.836318 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p8sp\" (UniqueName: \"kubernetes.io/projected/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b-kube-api-access-4p8sp\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.875730 4903 scope.go:117] "RemoveContainer" containerID="833256a887383f2a7ed1ab353c60a630698d0f4fa80dfc78cc0d919a7fb31f57" Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.876679 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4rfq"] Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.883253 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4rfq"] Jan 28 16:15:47 crc kubenswrapper[4903]: I0128 16:15:47.894094 4903 scope.go:117] "RemoveContainer" containerID="0fb5b71b0ec2471dac003d5cc2ec96d32735c7f7434a7336bbf4cf1d941c4320" Jan 28 16:15:48 crc kubenswrapper[4903]: I0128 16:15:48.430031 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" path="/var/lib/kubelet/pods/1e0ed0b9-3846-451e-94e8-58f47c0e0a7b/volumes" Jan 28 16:15:48 crc kubenswrapper[4903]: I0128 16:15:48.431568 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" path="/var/lib/kubelet/pods/64e294b6-3e85-4129-96cf-17ff6156c19d/volumes" Jan 28 16:15:52 crc kubenswrapper[4903]: I0128 16:15:52.005049 4903 scope.go:117] "RemoveContainer" containerID="e3e71f83d63ae6618fa225dc48da3f3defa052af3751ca0db1ffac97bca25831" Jan 28 16:15:54 crc kubenswrapper[4903]: I0128 16:15:54.414228 4903 scope.go:117] "RemoveContainer" 
containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:15:54 crc kubenswrapper[4903]: E0128 16:15:54.414818 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:16:09 crc kubenswrapper[4903]: I0128 16:16:09.414422 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:16:10 crc kubenswrapper[4903]: I0128 16:16:10.007434 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"231f41aa776bb33860a1f18b916026621b1b45aa545a4a189e91820bd71f37db"} Jan 28 16:18:26 crc kubenswrapper[4903]: I0128 16:18:26.613984 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:18:26 crc kubenswrapper[4903]: I0128 16:18:26.614442 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:18:56 crc kubenswrapper[4903]: I0128 16:18:56.614244 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:18:56 crc kubenswrapper[4903]: I0128 16:18:56.614932 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.613845 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.614422 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.614471 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:19:26 
crc kubenswrapper[4903]: I0128 16:19:26.615100 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"231f41aa776bb33860a1f18b916026621b1b45aa545a4a189e91820bd71f37db"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.615159 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://231f41aa776bb33860a1f18b916026621b1b45aa545a4a189e91820bd71f37db" gracePeriod=600 Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.990898 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="231f41aa776bb33860a1f18b916026621b1b45aa545a4a189e91820bd71f37db" exitCode=0 Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.990950 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"231f41aa776bb33860a1f18b916026621b1b45aa545a4a189e91820bd71f37db"} Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.991239 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571"} Jan 28 16:19:26 crc kubenswrapper[4903]: I0128 16:19:26.991258 4903 scope.go:117] "RemoveContainer" containerID="25e852f1f628abe9306ce2d0b383e872e3d2fc68d3a5d07c70d711cc759db61c" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.962784 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8mtq"] Jan 28 16:20:18 crc kubenswrapper[4903]: E0128 16:20:18.963488 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="extract-utilities" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963501 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="extract-utilities" Jan 28 16:20:18 crc kubenswrapper[4903]: E0128 16:20:18.963511 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="registry-server" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963517 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="registry-server" Jan 28 16:20:18 crc kubenswrapper[4903]: E0128 16:20:18.963556 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="extract-utilities" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963563 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="extract-utilities" Jan 28 16:20:18 crc kubenswrapper[4903]: E0128 16:20:18.963573 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="extract-content" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963579 
4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="extract-content" Jan 28 16:20:18 crc kubenswrapper[4903]: E0128 16:20:18.963588 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="extract-content" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963594 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="extract-content" Jan 28 16:20:18 crc kubenswrapper[4903]: E0128 16:20:18.963605 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="registry-server" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963610 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="registry-server" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963728 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0ed0b9-3846-451e-94e8-58f47c0e0a7b" containerName="registry-server" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.963827 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e294b6-3e85-4129-96cf-17ff6156c19d" containerName="registry-server" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.964874 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:18 crc kubenswrapper[4903]: I0128 16:20:18.979116 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8mtq"] Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.126745 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-utilities\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.126791 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz75q\" (UniqueName: \"kubernetes.io/projected/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-kube-api-access-cz75q\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.126863 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-catalog-content\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.228095 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-utilities\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.228162 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz75q\" (UniqueName: 
\"kubernetes.io/projected/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-kube-api-access-cz75q\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.228229 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-catalog-content\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.228710 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-utilities\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.228837 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-catalog-content\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.250597 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz75q\" (UniqueName: \"kubernetes.io/projected/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-kube-api-access-cz75q\") pod \"redhat-operators-k8mtq\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.286393 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:19 crc kubenswrapper[4903]: I0128 16:20:19.780312 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8mtq"] Jan 28 16:20:20 crc kubenswrapper[4903]: I0128 16:20:20.371855 4903 generic.go:334] "Generic (PLEG): container finished" podID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerID="fd5ed6b2c300771b746c63746c651cafd47bde06b1cfd979a1cb84fdba2ec29a" exitCode=0 Jan 28 16:20:20 crc kubenswrapper[4903]: I0128 16:20:20.371953 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8mtq" event={"ID":"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf","Type":"ContainerDied","Data":"fd5ed6b2c300771b746c63746c651cafd47bde06b1cfd979a1cb84fdba2ec29a"} Jan 28 16:20:20 crc kubenswrapper[4903]: I0128 16:20:20.372204 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8mtq" event={"ID":"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf","Type":"ContainerStarted","Data":"ca05e7be50af468ebf16d65f50cbef1da52206301e0241a05fc54fc45c37927c"} Jan 28 16:20:21 crc kubenswrapper[4903]: I0128 16:20:21.382125 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8mtq" event={"ID":"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf","Type":"ContainerStarted","Data":"0d44428fe5c704f89bafd6d5a1fc13de2974811920bc2a207b77451a0f2f7255"} Jan 28 16:20:22 crc kubenswrapper[4903]: I0128 16:20:22.390026 4903 generic.go:334] "Generic (PLEG): container finished" podID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerID="0d44428fe5c704f89bafd6d5a1fc13de2974811920bc2a207b77451a0f2f7255" exitCode=0 Jan 28 16:20:22 crc kubenswrapper[4903]: I0128 16:20:22.390131 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8mtq" event={"ID":"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf","Type":"ContainerDied","Data":"0d44428fe5c704f89bafd6d5a1fc13de2974811920bc2a207b77451a0f2f7255"} Jan 28 16:20:23 crc kubenswrapper[4903]: I0128 16:20:23.399499 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8mtq" event={"ID":"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf","Type":"ContainerStarted","Data":"2eb4426292e9598181375d5ef152fec77754725dadeb8b00653237df89b8c1c8"} Jan 28 16:20:23 crc kubenswrapper[4903]: I0128 16:20:23.421727 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8mtq" podStartSLOduration=2.921072436 podStartE2EDuration="5.421712835s" podCreationTimestamp="2026-01-28 16:20:18 +0000 UTC" firstStartedPulling="2026-01-28 16:20:20.37332888 +0000 UTC m=+2092.649300391" lastFinishedPulling="2026-01-28 16:20:22.873969279 +0000 UTC m=+2095.149940790" observedRunningTime="2026-01-28 16:20:23.415650139 +0000 UTC m=+2095.691621670" watchObservedRunningTime="2026-01-28 16:20:23.421712835 +0000 UTC m=+2095.697684346" Jan 28 16:20:29 crc kubenswrapper[4903]: I0128 16:20:29.287217 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:29 crc kubenswrapper[4903]: I0128 16:20:29.287894 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:29 crc kubenswrapper[4903]: I0128 16:20:29.338165 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 
28 16:20:29 crc kubenswrapper[4903]: I0128 16:20:29.481690 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:29 crc kubenswrapper[4903]: I0128 16:20:29.578992 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8mtq"] Jan 28 16:20:31 crc kubenswrapper[4903]: I0128 16:20:31.454902 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k8mtq" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="registry-server" containerID="cri-o://2eb4426292e9598181375d5ef152fec77754725dadeb8b00653237df89b8c1c8" gracePeriod=2 Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.468755 4903 generic.go:334] "Generic (PLEG): container finished" podID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerID="2eb4426292e9598181375d5ef152fec77754725dadeb8b00653237df89b8c1c8" exitCode=0 Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.468825 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8mtq" event={"ID":"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf","Type":"ContainerDied","Data":"2eb4426292e9598181375d5ef152fec77754725dadeb8b00653237df89b8c1c8"} Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.691793 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.845748 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-utilities\") pod \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.846025 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-catalog-content\") pod \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.846158 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz75q\" (UniqueName: \"kubernetes.io/projected/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-kube-api-access-cz75q\") pod \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\" (UID: \"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf\") " Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.846826 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-utilities" (OuterVolumeSpecName: "utilities") pod "29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" (UID: "29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.854555 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-kube-api-access-cz75q" (OuterVolumeSpecName: "kube-api-access-cz75q") pod "29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" (UID: "29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf"). InnerVolumeSpecName "kube-api-access-cz75q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.947908 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.947951 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz75q\" (UniqueName: \"kubernetes.io/projected/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-kube-api-access-cz75q\") on node \"crc\" DevicePath \"\"" Jan 28 16:20:33 crc kubenswrapper[4903]: I0128 16:20:33.971112 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" (UID: "29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.049221 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.478884 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8mtq" event={"ID":"29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf","Type":"ContainerDied","Data":"ca05e7be50af468ebf16d65f50cbef1da52206301e0241a05fc54fc45c37927c"} Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.478975 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8mtq" Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.479389 4903 scope.go:117] "RemoveContainer" containerID="2eb4426292e9598181375d5ef152fec77754725dadeb8b00653237df89b8c1c8" Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.502643 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8mtq"] Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.507488 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k8mtq"] Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.510608 4903 scope.go:117] "RemoveContainer" containerID="0d44428fe5c704f89bafd6d5a1fc13de2974811920bc2a207b77451a0f2f7255" Jan 28 16:20:34 crc kubenswrapper[4903]: I0128 16:20:34.533417 4903 scope.go:117] "RemoveContainer" containerID="fd5ed6b2c300771b746c63746c651cafd47bde06b1cfd979a1cb84fdba2ec29a" Jan 28 16:20:36 crc kubenswrapper[4903]: I0128 16:20:36.422600 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" path="/var/lib/kubelet/pods/29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf/volumes" Jan 28 16:21:26 crc kubenswrapper[4903]: I0128 16:21:26.613980 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:21:26 crc kubenswrapper[4903]: I0128 16:21:26.614482 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:21:56 crc kubenswrapper[4903]: I0128 16:21:56.614032 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:21:56 crc kubenswrapper[4903]: I0128 16:21:56.615143 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:22:26 crc kubenswrapper[4903]: I0128 16:22:26.613460 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:22:26 crc kubenswrapper[4903]: I0128 16:22:26.614724 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:22:26 crc kubenswrapper[4903]: I0128 16:22:26.614797 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:22:26 crc kubenswrapper[4903]: I0128 16:22:26.615495 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:22:26 crc kubenswrapper[4903]: I0128 16:22:26.615587 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" gracePeriod=600 Jan 28 16:22:26 crc kubenswrapper[4903]: E0128 16:22:26.737917 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:22:27 crc kubenswrapper[4903]: I0128 16:22:27.415582 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" exitCode=0 Jan 28 16:22:27 crc kubenswrapper[4903]: I0128 16:22:27.415640 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571"} Jan 28 16:22:27 crc kubenswrapper[4903]: I0128 16:22:27.415677 4903 scope.go:117] "RemoveContainer" containerID="231f41aa776bb33860a1f18b916026621b1b45aa545a4a189e91820bd71f37db" Jan 28 16:22:27 crc kubenswrapper[4903]: I0128 16:22:27.416255 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:22:27 crc kubenswrapper[4903]: E0128 16:22:27.419290 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:22:42 crc kubenswrapper[4903]: I0128 16:22:42.413487 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:22:42 crc kubenswrapper[4903]: E0128 16:22:42.414385 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.050296 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hdl68"] Jan 28 16:22:54 crc kubenswrapper[4903]: E0128 16:22:54.051587 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="extract-utilities" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.051622 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="extract-utilities" Jan 28 16:22:54 crc kubenswrapper[4903]: E0128 16:22:54.051674 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="extract-content" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.051691 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="extract-content" Jan 28 16:22:54 crc kubenswrapper[4903]: E0128 16:22:54.051717 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="registry-server" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.051734 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="registry-server" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.052098 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d4b020-5fb0-4b50-b1ee-b8ec45a40fbf" containerName="registry-server" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.054213 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.070794 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdl68"] Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.134638 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98x8q\" (UniqueName: \"kubernetes.io/projected/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-kube-api-access-98x8q\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.134726 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-utilities\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.134956 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-catalog-content\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.236816 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-utilities\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.236910 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-catalog-content\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.237028 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98x8q\" (UniqueName: \"kubernetes.io/projected/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-kube-api-access-98x8q\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.237330 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-utilities\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.237435 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-catalog-content\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.260600 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-98x8q\" (UniqueName: \"kubernetes.io/projected/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-kube-api-access-98x8q\") pod \"community-operators-hdl68\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.385593 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:22:54 crc kubenswrapper[4903]: I0128 16:22:54.867187 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hdl68"] Jan 28 16:22:55 crc kubenswrapper[4903]: I0128 16:22:55.637721 4903 generic.go:334] "Generic (PLEG): container finished" podID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerID="fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a" exitCode=0 Jan 28 16:22:55 crc kubenswrapper[4903]: I0128 16:22:55.637768 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdl68" event={"ID":"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2","Type":"ContainerDied","Data":"fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a"} Jan 28 16:22:55 crc kubenswrapper[4903]: I0128 16:22:55.637799 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdl68" event={"ID":"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2","Type":"ContainerStarted","Data":"a18decbda2d6c7435ff6acba2626f4b20237c513b6b849b2ef3a3578b7ac9c1b"} Jan 28 16:22:55 crc kubenswrapper[4903]: I0128 16:22:55.640166 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:22:56 crc kubenswrapper[4903]: I0128 16:22:56.413216 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:22:56 crc kubenswrapper[4903]: E0128 16:22:56.413992 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:22:56 crc kubenswrapper[4903]: I0128 16:22:56.646097 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdl68" event={"ID":"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2","Type":"ContainerStarted","Data":"e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c"} Jan 28 16:22:57 crc kubenswrapper[4903]: I0128 16:22:57.658159 4903 generic.go:334] "Generic (PLEG): container finished" podID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerID="e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c" exitCode=0 Jan 28 16:22:57 crc kubenswrapper[4903]: I0128 16:22:57.658232 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdl68" event={"ID":"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2","Type":"ContainerDied","Data":"e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c"} Jan 28 16:22:58 crc kubenswrapper[4903]: I0128 16:22:58.669279 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdl68" 
event={"ID":"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2","Type":"ContainerStarted","Data":"1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528"} Jan 28 16:22:58 crc kubenswrapper[4903]: I0128 16:22:58.687692 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hdl68" podStartSLOduration=2.273555483 podStartE2EDuration="4.687661422s" podCreationTimestamp="2026-01-28 16:22:54 +0000 UTC" firstStartedPulling="2026-01-28 16:22:55.639939975 +0000 UTC m=+2247.915911476" lastFinishedPulling="2026-01-28 16:22:58.054045894 +0000 UTC m=+2250.330017415" observedRunningTime="2026-01-28 16:22:58.686046118 +0000 UTC m=+2250.962017629" watchObservedRunningTime="2026-01-28 16:22:58.687661422 +0000 UTC m=+2250.963632933" Jan 28 16:23:04 crc kubenswrapper[4903]: I0128 16:23:04.386297 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:23:04 crc kubenswrapper[4903]: I0128 16:23:04.387832 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:23:04 crc kubenswrapper[4903]: I0128 16:23:04.441264 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:23:04 crc kubenswrapper[4903]: I0128 16:23:04.765300 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:23:04 crc kubenswrapper[4903]: I0128 16:23:04.809638 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdl68"] Jan 28 16:23:06 crc kubenswrapper[4903]: I0128 16:23:06.736877 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hdl68" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="registry-server" containerID="cri-o://1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528" gracePeriod=2 Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.203744 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.365906 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98x8q\" (UniqueName: \"kubernetes.io/projected/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-kube-api-access-98x8q\") pod \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.365961 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-catalog-content\") pod \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.366036 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-utilities\") pod \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\" (UID: \"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2\") " Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.367446 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-utilities" (OuterVolumeSpecName: "utilities") pod "bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" (UID: "bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.375635 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-kube-api-access-98x8q" (OuterVolumeSpecName: "kube-api-access-98x8q") pod "bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" (UID: "bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2"). InnerVolumeSpecName "kube-api-access-98x8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.468387 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98x8q\" (UniqueName: \"kubernetes.io/projected/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-kube-api-access-98x8q\") on node \"crc\" DevicePath \"\"" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.468816 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.750962 4903 generic.go:334] "Generic (PLEG): container finished" podID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerID="1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528" exitCode=0 Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.751076 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdl68" event={"ID":"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2","Type":"ContainerDied","Data":"1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528"} Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.751109 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hdl68" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.751161 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hdl68" event={"ID":"bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2","Type":"ContainerDied","Data":"a18decbda2d6c7435ff6acba2626f4b20237c513b6b849b2ef3a3578b7ac9c1b"} Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.751193 4903 scope.go:117] "RemoveContainer" containerID="1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.785634 4903 scope.go:117] "RemoveContainer" containerID="e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.812150 4903 scope.go:117] "RemoveContainer" containerID="fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.847788 4903 scope.go:117] "RemoveContainer" containerID="1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528" Jan 28 16:23:07 crc kubenswrapper[4903]: E0128 16:23:07.848516 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528\": container with ID starting with 1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528 not found: ID does not exist" containerID="1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.848584 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528"} err="failed to get container status \"1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528\": rpc error: code = NotFound desc = could not find container \"1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528\": container with ID starting with 1e08dfa3089da778907d0992e61bbb25f6db66303e5800bf84a89e7a44144528 not found: ID does not exist" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.848615 4903 scope.go:117] "RemoveContainer" containerID="e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c" Jan 28 16:23:07 crc kubenswrapper[4903]: E0128 16:23:07.849426 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c\": container with ID starting with e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c not found: ID does not exist" containerID="e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.849553 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c"} err="failed to get container status \"e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c\": rpc error: code = NotFound desc = could not find container \"e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c\": container with ID starting with e611d7dd8132a11983b595f37eb80df2c79db4a5f1b0facfaa8c9a60d61f0f7c not found: ID does not exist" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.849613 4903 scope.go:117] "RemoveContainer" 
containerID="fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a" Jan 28 16:23:07 crc kubenswrapper[4903]: E0128 16:23:07.850141 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a\": container with ID starting with fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a not found: ID does not exist" containerID="fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.850214 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a"} err="failed to get container status \"fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a\": rpc error: code = NotFound desc = could not find container \"fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a\": container with ID starting with fcd53c4426dfb8886e87ef4271702b25abe5435a8e932c7c2cd642804794e85a not found: ID does not exist" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.970643 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" (UID: "bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:23:07 crc kubenswrapper[4903]: I0128 16:23:07.976458 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:23:08 crc kubenswrapper[4903]: I0128 16:23:08.093578 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hdl68"] Jan 28 16:23:08 crc kubenswrapper[4903]: I0128 16:23:08.100598 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hdl68"] Jan 28 16:23:08 crc kubenswrapper[4903]: I0128 16:23:08.419721 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:23:08 crc kubenswrapper[4903]: E0128 16:23:08.420069 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:23:08 crc kubenswrapper[4903]: I0128 16:23:08.423390 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" path="/var/lib/kubelet/pods/bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2/volumes" Jan 28 16:23:22 crc kubenswrapper[4903]: I0128 16:23:22.412975 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:23:22 crc kubenswrapper[4903]: E0128 16:23:22.413779 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:23:35 crc kubenswrapper[4903]: I0128 16:23:35.413491 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:23:35 crc kubenswrapper[4903]: E0128 16:23:35.414389 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:23:49 crc kubenswrapper[4903]: I0128 16:23:49.413154 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:23:49 crc kubenswrapper[4903]: E0128 16:23:49.413868 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:24:01 crc kubenswrapper[4903]: I0128 16:24:01.413206 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:24:01 crc kubenswrapper[4903]: E0128 16:24:01.414602 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:24:13 crc kubenswrapper[4903]: I0128 16:24:13.413786 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:24:13 crc kubenswrapper[4903]: E0128 16:24:13.414441 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:24:24 crc kubenswrapper[4903]: I0128 16:24:24.414095 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:24:24 crc kubenswrapper[4903]: E0128 16:24:24.415258 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:24:36 crc kubenswrapper[4903]: I0128 16:24:36.414610 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:24:36 crc kubenswrapper[4903]: E0128 16:24:36.415519 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:24:50 crc kubenswrapper[4903]: I0128 16:24:50.413439 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:24:50 crc kubenswrapper[4903]: E0128 16:24:50.414201 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:25:01 crc kubenswrapper[4903]: I0128 16:25:01.412955 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:25:01 crc kubenswrapper[4903]: E0128 16:25:01.413703 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:25:16 crc kubenswrapper[4903]: I0128 16:25:16.413546 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:25:16 crc kubenswrapper[4903]: E0128 16:25:16.414396 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:25:28 crc kubenswrapper[4903]: I0128 16:25:28.416862 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:25:28 crc kubenswrapper[4903]: E0128 16:25:28.417842 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.436312 4903 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9j9zw"] Jan 28 16:25:36 crc kubenswrapper[4903]: E0128 16:25:36.437100 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="registry-server" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.437111 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="registry-server" Jan 28 16:25:36 crc kubenswrapper[4903]: E0128 16:25:36.437133 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="extract-utilities" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.437139 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="extract-utilities" Jan 28 16:25:36 crc kubenswrapper[4903]: E0128 16:25:36.437150 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="extract-content" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.437158 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="extract-content" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.437271 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3ac4b6-f827-47bb-a4d5-94b4ef1177c2" containerName="registry-server" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.438318 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.454895 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9j9zw"] Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.506484 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8htj\" (UniqueName: \"kubernetes.io/projected/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-kube-api-access-s8htj\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.506568 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-utilities\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.506635 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-catalog-content\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.608467 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-catalog-content\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: 
I0128 16:25:36.608624 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8htj\" (UniqueName: \"kubernetes.io/projected/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-kube-api-access-s8htj\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.608659 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-utilities\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.609039 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-catalog-content\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.609071 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-utilities\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.665660 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8htj\" (UniqueName: \"kubernetes.io/projected/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-kube-api-access-s8htj\") pod \"certified-operators-9j9zw\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:36 crc kubenswrapper[4903]: I0128 16:25:36.765674 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:37 crc kubenswrapper[4903]: I0128 16:25:37.234283 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9j9zw"] Jan 28 16:25:37 crc kubenswrapper[4903]: I0128 16:25:37.505161 4903 generic.go:334] "Generic (PLEG): container finished" podID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerID="fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88" exitCode=0 Jan 28 16:25:37 crc kubenswrapper[4903]: I0128 16:25:37.505210 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9j9zw" event={"ID":"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4","Type":"ContainerDied","Data":"fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88"} Jan 28 16:25:37 crc kubenswrapper[4903]: I0128 16:25:37.505506 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9j9zw" event={"ID":"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4","Type":"ContainerStarted","Data":"fbf02022ab037d40ff270c4217cfa4ed304e091bb0f32a7626f501179e833e6e"} Jan 28 16:25:38 crc kubenswrapper[4903]: I0128 16:25:38.517566 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9j9zw" event={"ID":"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4","Type":"ContainerStarted","Data":"872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048"} Jan 28 16:25:39 crc kubenswrapper[4903]: I0128 16:25:39.413941 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:25:39 crc kubenswrapper[4903]: E0128 16:25:39.414169 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:25:39 crc kubenswrapper[4903]: I0128 16:25:39.525736 4903 generic.go:334] "Generic (PLEG): container finished" podID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerID="872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048" exitCode=0 Jan 28 16:25:39 crc kubenswrapper[4903]: I0128 16:25:39.525795 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9j9zw" event={"ID":"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4","Type":"ContainerDied","Data":"872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048"} Jan 28 16:25:40 crc kubenswrapper[4903]: I0128 16:25:40.533554 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9j9zw" event={"ID":"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4","Type":"ContainerStarted","Data":"18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726"} Jan 28 16:25:40 crc kubenswrapper[4903]: I0128 16:25:40.553799 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9j9zw" podStartSLOduration=2.129892513 podStartE2EDuration="4.553781317s" podCreationTimestamp="2026-01-28 16:25:36 +0000 UTC" firstStartedPulling="2026-01-28 16:25:37.506693212 +0000 UTC m=+2409.782664723" lastFinishedPulling="2026-01-28 16:25:39.930582006 +0000 UTC m=+2412.206553527" observedRunningTime="2026-01-28 
16:25:40.550881518 +0000 UTC m=+2412.826853029" watchObservedRunningTime="2026-01-28 16:25:40.553781317 +0000 UTC m=+2412.829752828" Jan 28 16:25:46 crc kubenswrapper[4903]: I0128 16:25:46.766050 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:46 crc kubenswrapper[4903]: I0128 16:25:46.766682 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:46 crc kubenswrapper[4903]: I0128 16:25:46.817077 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:47 crc kubenswrapper[4903]: I0128 16:25:47.630349 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:47 crc kubenswrapper[4903]: I0128 16:25:47.680619 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9j9zw"] Jan 28 16:25:49 crc kubenswrapper[4903]: I0128 16:25:49.603024 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9j9zw" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="registry-server" containerID="cri-o://18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726" gracePeriod=2 Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.462684 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.606581 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-catalog-content\") pod \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.606781 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8htj\" (UniqueName: \"kubernetes.io/projected/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-kube-api-access-s8htj\") pod \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.606818 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-utilities\") pod \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\" (UID: \"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4\") " Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.609443 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-utilities" (OuterVolumeSpecName: "utilities") pod "1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" (UID: "1f33adcf-9bde-4d02-a552-aa85fa6d8ce4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.616718 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-kube-api-access-s8htj" (OuterVolumeSpecName: "kube-api-access-s8htj") pod "1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" (UID: "1f33adcf-9bde-4d02-a552-aa85fa6d8ce4"). InnerVolumeSpecName "kube-api-access-s8htj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.616926 4903 generic.go:334] "Generic (PLEG): container finished" podID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerID="18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726" exitCode=0 Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.616968 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9j9zw" event={"ID":"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4","Type":"ContainerDied","Data":"18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726"} Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.617004 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9j9zw" event={"ID":"1f33adcf-9bde-4d02-a552-aa85fa6d8ce4","Type":"ContainerDied","Data":"fbf02022ab037d40ff270c4217cfa4ed304e091bb0f32a7626f501179e833e6e"} Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.617019 4903 scope.go:117] "RemoveContainer" containerID="18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.617162 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9j9zw" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.647935 4903 scope.go:117] "RemoveContainer" containerID="872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.656510 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" (UID: "1f33adcf-9bde-4d02-a552-aa85fa6d8ce4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.673665 4903 scope.go:117] "RemoveContainer" containerID="fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.692307 4903 scope.go:117] "RemoveContainer" containerID="18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726" Jan 28 16:25:50 crc kubenswrapper[4903]: E0128 16:25:50.694737 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726\": container with ID starting with 18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726 not found: ID does not exist" containerID="18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.694772 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726"} err="failed to get container status \"18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726\": rpc error: code = NotFound desc = could not find container \"18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726\": container with ID starting with 18fa16c556915ae36a2cbf711cfe4fe758deccd8710d3fcff55673b40afbf726 not found: ID does not exist" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.694793 4903 scope.go:117] "RemoveContainer" containerID="872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048" Jan 28 16:25:50 crc kubenswrapper[4903]: E0128 16:25:50.695302 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048\": container with ID starting with 872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048 not found: ID does not exist" containerID="872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.695429 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048"} err="failed to get container status \"872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048\": rpc error: code = NotFound desc = could not find container \"872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048\": container with ID starting with 872bc44c1a463582224675d759c8cf81e5e6cd81c4ad59871810cacf3fd12048 not found: ID does not exist" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.695568 4903 scope.go:117] "RemoveContainer" containerID="fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88" Jan 28 16:25:50 crc kubenswrapper[4903]: E0128 16:25:50.696021 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88\": container with ID starting with fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88 not found: ID does not exist" containerID="fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.696059 4903 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88"} err="failed to get container status \"fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88\": rpc error: code = NotFound desc = could not find container \"fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88\": container with ID starting with fa0aae9064d4e54ce352f77a5ff7b549568ce903ba462a98dd7dac0725217b88 not found: ID does not exist" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.708772 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8htj\" (UniqueName: \"kubernetes.io/projected/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-kube-api-access-s8htj\") on node \"crc\" DevicePath \"\"" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.708800 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.708811 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.950464 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9j9zw"] Jan 28 16:25:50 crc kubenswrapper[4903]: I0128 16:25:50.962402 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9j9zw"] Jan 28 16:25:52 crc kubenswrapper[4903]: I0128 16:25:52.413512 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:25:52 crc kubenswrapper[4903]: E0128 16:25:52.413854 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:25:52 crc kubenswrapper[4903]: I0128 16:25:52.422630 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" path="/var/lib/kubelet/pods/1f33adcf-9bde-4d02-a552-aa85fa6d8ce4/volumes" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.030137 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9z2"] Jan 28 16:26:01 crc kubenswrapper[4903]: E0128 16:26:01.030961 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="extract-utilities" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.030975 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="extract-utilities" Jan 28 16:26:01 crc kubenswrapper[4903]: E0128 16:26:01.030989 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="extract-content" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.030995 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="extract-content" Jan 28 16:26:01 crc kubenswrapper[4903]: E0128 
16:26:01.031021 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="registry-server" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.031031 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="registry-server" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.031152 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f33adcf-9bde-4d02-a552-aa85fa6d8ce4" containerName="registry-server" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.032229 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.040883 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9z2"] Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.143987 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-catalog-content\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.144051 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44pt7\" (UniqueName: \"kubernetes.io/projected/01e01637-7982-49db-ae60-9f3f6a4cf124-kube-api-access-44pt7\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.144885 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-utilities\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.246076 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-utilities\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.246397 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-catalog-content\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.246508 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44pt7\" (UniqueName: \"kubernetes.io/projected/01e01637-7982-49db-ae60-9f3f6a4cf124-kube-api-access-44pt7\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.246596 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-utilities\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.246882 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-catalog-content\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.272935 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44pt7\" (UniqueName: \"kubernetes.io/projected/01e01637-7982-49db-ae60-9f3f6a4cf124-kube-api-access-44pt7\") pod \"redhat-marketplace-hz9z2\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.351769 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:01 crc kubenswrapper[4903]: I0128 16:26:01.902054 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9z2"] Jan 28 16:26:02 crc kubenswrapper[4903]: I0128 16:26:02.699081 4903 generic.go:334] "Generic (PLEG): container finished" podID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerID="a98eb61b42fc2b8508513b23e68da96bbdf16fa160885520bf7fc5dde88edcff" exitCode=0 Jan 28 16:26:02 crc kubenswrapper[4903]: I0128 16:26:02.699178 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9z2" event={"ID":"01e01637-7982-49db-ae60-9f3f6a4cf124","Type":"ContainerDied","Data":"a98eb61b42fc2b8508513b23e68da96bbdf16fa160885520bf7fc5dde88edcff"} Jan 28 16:26:02 crc kubenswrapper[4903]: I0128 16:26:02.699404 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9z2" event={"ID":"01e01637-7982-49db-ae60-9f3f6a4cf124","Type":"ContainerStarted","Data":"9f0e1dfcb4c06592a3130a3f78f082c562a4d5cd85bdd91e2cc5b2fea090e1e3"} Jan 28 16:26:03 crc kubenswrapper[4903]: I0128 16:26:03.413559 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:26:03 crc kubenswrapper[4903]: E0128 16:26:03.413811 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:26:03 crc kubenswrapper[4903]: I0128 16:26:03.708160 4903 generic.go:334] "Generic (PLEG): container finished" podID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerID="9d40a6e3bbb55d352d13993f6e12900eac8b3c4d16cb59f3b792a66b3e81bd84" exitCode=0 Jan 28 16:26:03 crc kubenswrapper[4903]: I0128 16:26:03.708214 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9z2" event={"ID":"01e01637-7982-49db-ae60-9f3f6a4cf124","Type":"ContainerDied","Data":"9d40a6e3bbb55d352d13993f6e12900eac8b3c4d16cb59f3b792a66b3e81bd84"} Jan 28 16:26:04 crc kubenswrapper[4903]: 
I0128 16:26:04.718596 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9z2" event={"ID":"01e01637-7982-49db-ae60-9f3f6a4cf124","Type":"ContainerStarted","Data":"22d79beac63da2d0db2ebcf698db6e0adccee6ae4078958bfba60f71aba55dd8"} Jan 28 16:26:04 crc kubenswrapper[4903]: I0128 16:26:04.736405 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hz9z2" podStartSLOduration=2.287285813 podStartE2EDuration="3.736388392s" podCreationTimestamp="2026-01-28 16:26:01 +0000 UTC" firstStartedPulling="2026-01-28 16:26:02.700867886 +0000 UTC m=+2434.976839397" lastFinishedPulling="2026-01-28 16:26:04.149970465 +0000 UTC m=+2436.425941976" observedRunningTime="2026-01-28 16:26:04.735002164 +0000 UTC m=+2437.010973695" watchObservedRunningTime="2026-01-28 16:26:04.736388392 +0000 UTC m=+2437.012359923" Jan 28 16:26:11 crc kubenswrapper[4903]: I0128 16:26:11.353349 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:11 crc kubenswrapper[4903]: I0128 16:26:11.353936 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:11 crc kubenswrapper[4903]: I0128 16:26:11.400658 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:11 crc kubenswrapper[4903]: I0128 16:26:11.807244 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:15 crc kubenswrapper[4903]: I0128 16:26:15.008999 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9z2"] Jan 28 16:26:15 crc kubenswrapper[4903]: I0128 16:26:15.009889 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hz9z2" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="registry-server" containerID="cri-o://22d79beac63da2d0db2ebcf698db6e0adccee6ae4078958bfba60f71aba55dd8" gracePeriod=2 Jan 28 16:26:15 crc kubenswrapper[4903]: I0128 16:26:15.799012 4903 generic.go:334] "Generic (PLEG): container finished" podID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerID="22d79beac63da2d0db2ebcf698db6e0adccee6ae4078958bfba60f71aba55dd8" exitCode=0 Jan 28 16:26:15 crc kubenswrapper[4903]: I0128 16:26:15.799078 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9z2" event={"ID":"01e01637-7982-49db-ae60-9f3f6a4cf124","Type":"ContainerDied","Data":"22d79beac63da2d0db2ebcf698db6e0adccee6ae4078958bfba60f71aba55dd8"} Jan 28 16:26:15 crc kubenswrapper[4903]: I0128 16:26:15.902950 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.048868 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-catalog-content\") pod \"01e01637-7982-49db-ae60-9f3f6a4cf124\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.048954 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44pt7\" (UniqueName: \"kubernetes.io/projected/01e01637-7982-49db-ae60-9f3f6a4cf124-kube-api-access-44pt7\") pod \"01e01637-7982-49db-ae60-9f3f6a4cf124\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.049293 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-utilities\") pod \"01e01637-7982-49db-ae60-9f3f6a4cf124\" (UID: \"01e01637-7982-49db-ae60-9f3f6a4cf124\") " Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.050801 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-utilities" (OuterVolumeSpecName: "utilities") pod "01e01637-7982-49db-ae60-9f3f6a4cf124" (UID: "01e01637-7982-49db-ae60-9f3f6a4cf124"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.055793 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e01637-7982-49db-ae60-9f3f6a4cf124-kube-api-access-44pt7" (OuterVolumeSpecName: "kube-api-access-44pt7") pod "01e01637-7982-49db-ae60-9f3f6a4cf124" (UID: "01e01637-7982-49db-ae60-9f3f6a4cf124"). InnerVolumeSpecName "kube-api-access-44pt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.075463 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01e01637-7982-49db-ae60-9f3f6a4cf124" (UID: "01e01637-7982-49db-ae60-9f3f6a4cf124"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.150669 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.150714 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01e01637-7982-49db-ae60-9f3f6a4cf124-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.150727 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44pt7\" (UniqueName: \"kubernetes.io/projected/01e01637-7982-49db-ae60-9f3f6a4cf124-kube-api-access-44pt7\") on node \"crc\" DevicePath \"\"" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.416026 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:26:16 crc kubenswrapper[4903]: E0128 16:26:16.417247 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.807898 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz9z2" event={"ID":"01e01637-7982-49db-ae60-9f3f6a4cf124","Type":"ContainerDied","Data":"9f0e1dfcb4c06592a3130a3f78f082c562a4d5cd85bdd91e2cc5b2fea090e1e3"} Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.807954 4903 scope.go:117] "RemoveContainer" containerID="22d79beac63da2d0db2ebcf698db6e0adccee6ae4078958bfba60f71aba55dd8" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.808014 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz9z2" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.827940 4903 scope.go:117] "RemoveContainer" containerID="9d40a6e3bbb55d352d13993f6e12900eac8b3c4d16cb59f3b792a66b3e81bd84" Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.828906 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9z2"] Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.836003 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz9z2"] Jan 28 16:26:16 crc kubenswrapper[4903]: I0128 16:26:16.855026 4903 scope.go:117] "RemoveContainer" containerID="a98eb61b42fc2b8508513b23e68da96bbdf16fa160885520bf7fc5dde88edcff" Jan 28 16:26:18 crc kubenswrapper[4903]: I0128 16:26:18.423796 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" path="/var/lib/kubelet/pods/01e01637-7982-49db-ae60-9f3f6a4cf124/volumes" Jan 28 16:26:29 crc kubenswrapper[4903]: I0128 16:26:29.413482 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:26:29 crc kubenswrapper[4903]: E0128 16:26:29.415719 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:26:44 crc kubenswrapper[4903]: I0128 16:26:44.413578 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:26:44 crc kubenswrapper[4903]: E0128 16:26:44.414408 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:26:56 crc kubenswrapper[4903]: I0128 16:26:56.413676 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:26:56 crc kubenswrapper[4903]: E0128 16:26:56.414407 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:27:09 crc kubenswrapper[4903]: I0128 16:27:09.413607 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:27:09 crc kubenswrapper[4903]: E0128 16:27:09.414560 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:27:22 crc kubenswrapper[4903]: I0128 16:27:22.413437 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:27:22 crc kubenswrapper[4903]: E0128 16:27:22.414322 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:27:36 crc kubenswrapper[4903]: I0128 16:27:36.413740 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:27:37 crc kubenswrapper[4903]: I0128 16:27:37.398289 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"07c1e0c99b0ff86052a37b1673bc8a9a400253f36ddce6f5e9c449e6e63bec92"} Jan 28 16:29:56 crc kubenswrapper[4903]: I0128 16:29:56.614272 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:29:56 crc kubenswrapper[4903]: I0128 16:29:56.615718 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.145917 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms"] Jan 28 16:30:00 crc kubenswrapper[4903]: E0128 16:30:00.146573 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="extract-utilities" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.146593 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="extract-utilities" Jan 28 16:30:00 crc kubenswrapper[4903]: E0128 16:30:00.146610 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="extract-content" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.146617 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="extract-content" Jan 28 16:30:00 crc kubenswrapper[4903]: E0128 16:30:00.146630 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="registry-server" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.146637 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="registry-server" Jan 28 
16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.146831 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e01637-7982-49db-ae60-9f3f6a4cf124" containerName="registry-server" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.147427 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.150164 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.155902 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.158058 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms"] Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.260241 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-secret-volume\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.260426 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-config-volume\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.260452 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmfsh\" (UniqueName: \"kubernetes.io/projected/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-kube-api-access-wmfsh\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.362194 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-config-volume\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.362240 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmfsh\" (UniqueName: \"kubernetes.io/projected/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-kube-api-access-wmfsh\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.362282 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-secret-volume\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.363154 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-config-volume\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.378106 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-secret-volume\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.380420 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmfsh\" (UniqueName: \"kubernetes.io/projected/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-kube-api-access-wmfsh\") pod \"collect-profiles-29493630-j5kms\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.482832 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:00 crc kubenswrapper[4903]: I0128 16:30:00.891709 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms"] Jan 28 16:30:01 crc kubenswrapper[4903]: I0128 16:30:01.520492 4903 generic.go:334] "Generic (PLEG): container finished" podID="5f46c9d2-c258-49d5-84b0-61e5dd23d5af" containerID="deed94d927913b00c5c6c75e56f8e4c3e0c6802d21cc2ef40918ad468180d7a8" exitCode=0 Jan 28 16:30:01 crc kubenswrapper[4903]: I0128 16:30:01.520576 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" event={"ID":"5f46c9d2-c258-49d5-84b0-61e5dd23d5af","Type":"ContainerDied","Data":"deed94d927913b00c5c6c75e56f8e4c3e0c6802d21cc2ef40918ad468180d7a8"} Jan 28 16:30:01 crc kubenswrapper[4903]: I0128 16:30:01.520601 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" event={"ID":"5f46c9d2-c258-49d5-84b0-61e5dd23d5af","Type":"ContainerStarted","Data":"fc9606cdc0417f50f2e3507b0f7a160023b21cabbf419493671ec3a713dd3f3d"} Jan 28 16:30:02 crc kubenswrapper[4903]: I0128 16:30:02.793622 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:02 crc kubenswrapper[4903]: I0128 16:30:02.924996 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-config-volume\") pod \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " Jan 28 16:30:02 crc kubenswrapper[4903]: I0128 16:30:02.925100 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmfsh\" (UniqueName: \"kubernetes.io/projected/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-kube-api-access-wmfsh\") pod \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " Jan 28 16:30:02 crc kubenswrapper[4903]: I0128 16:30:02.925163 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-secret-volume\") pod \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\" (UID: \"5f46c9d2-c258-49d5-84b0-61e5dd23d5af\") " Jan 28 16:30:02 crc kubenswrapper[4903]: I0128 16:30:02.925804 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-config-volume" (OuterVolumeSpecName: "config-volume") pod "5f46c9d2-c258-49d5-84b0-61e5dd23d5af" (UID: "5f46c9d2-c258-49d5-84b0-61e5dd23d5af"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:30:02 crc kubenswrapper[4903]: I0128 16:30:02.931795 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-kube-api-access-wmfsh" (OuterVolumeSpecName: "kube-api-access-wmfsh") pod "5f46c9d2-c258-49d5-84b0-61e5dd23d5af" (UID: "5f46c9d2-c258-49d5-84b0-61e5dd23d5af"). InnerVolumeSpecName "kube-api-access-wmfsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:30:02 crc kubenswrapper[4903]: I0128 16:30:02.931813 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5f46c9d2-c258-49d5-84b0-61e5dd23d5af" (UID: "5f46c9d2-c258-49d5-84b0-61e5dd23d5af"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.027195 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmfsh\" (UniqueName: \"kubernetes.io/projected/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-kube-api-access-wmfsh\") on node \"crc\" DevicePath \"\"" Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.027231 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.027241 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f46c9d2-c258-49d5-84b0-61e5dd23d5af-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.535077 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" event={"ID":"5f46c9d2-c258-49d5-84b0-61e5dd23d5af","Type":"ContainerDied","Data":"fc9606cdc0417f50f2e3507b0f7a160023b21cabbf419493671ec3a713dd3f3d"} Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.535125 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9606cdc0417f50f2e3507b0f7a160023b21cabbf419493671ec3a713dd3f3d" Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.535132 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms" Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.905577 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9"] Jan 28 16:30:03 crc kubenswrapper[4903]: I0128 16:30:03.914124 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-tx6j9"] Jan 28 16:30:04 crc kubenswrapper[4903]: I0128 16:30:04.426045 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="391b7add-cc22-451b-a87a-8130bb8924cb" path="/var/lib/kubelet/pods/391b7add-cc22-451b-a87a-8130bb8924cb/volumes" Jan 28 16:30:26 crc kubenswrapper[4903]: I0128 16:30:26.614112 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:30:26 crc kubenswrapper[4903]: I0128 16:30:26.614701 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.835078 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6qhjc"] Jan 28 16:30:46 crc kubenswrapper[4903]: E0128 16:30:46.836086 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f46c9d2-c258-49d5-84b0-61e5dd23d5af" containerName="collect-profiles" Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.836108 4903 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f46c9d2-c258-49d5-84b0-61e5dd23d5af" containerName="collect-profiles" Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.836335 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f46c9d2-c258-49d5-84b0-61e5dd23d5af" containerName="collect-profiles" Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.837726 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.844600 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qhjc"] Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.944323 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-utilities\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.944381 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nr84\" (UniqueName: \"kubernetes.io/projected/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-kube-api-access-8nr84\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:46 crc kubenswrapper[4903]: I0128 16:30:46.944432 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-catalog-content\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.046393 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-catalog-content\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.046507 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-utilities\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.046547 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nr84\" (UniqueName: \"kubernetes.io/projected/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-kube-api-access-8nr84\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.046988 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-utilities\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.046986 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-catalog-content\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.082602 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nr84\" (UniqueName: \"kubernetes.io/projected/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-kube-api-access-8nr84\") pod \"redhat-operators-6qhjc\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.156143 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:47 crc kubenswrapper[4903]: I0128 16:30:47.685610 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qhjc"] Jan 28 16:30:48 crc kubenswrapper[4903]: I0128 16:30:48.110975 4903 generic.go:334] "Generic (PLEG): container finished" podID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerID="4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3" exitCode=0 Jan 28 16:30:48 crc kubenswrapper[4903]: I0128 16:30:48.111021 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qhjc" event={"ID":"bdbb7ec1-e0d7-498c-99c8-844466bafdc9","Type":"ContainerDied","Data":"4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3"} Jan 28 16:30:48 crc kubenswrapper[4903]: I0128 16:30:48.111050 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qhjc" event={"ID":"bdbb7ec1-e0d7-498c-99c8-844466bafdc9","Type":"ContainerStarted","Data":"cc5215eb4159df6717ff44a838c7e44c136930b4c525dd5ca3548a591d89915b"} Jan 28 16:30:48 crc kubenswrapper[4903]: I0128 16:30:48.113259 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:30:50 crc kubenswrapper[4903]: I0128 16:30:50.124795 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qhjc" event={"ID":"bdbb7ec1-e0d7-498c-99c8-844466bafdc9","Type":"ContainerStarted","Data":"c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed"} Jan 28 16:30:51 crc kubenswrapper[4903]: I0128 16:30:51.132334 4903 generic.go:334] "Generic (PLEG): container finished" podID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerID="c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed" exitCode=0 Jan 28 16:30:51 crc kubenswrapper[4903]: I0128 16:30:51.132385 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qhjc" event={"ID":"bdbb7ec1-e0d7-498c-99c8-844466bafdc9","Type":"ContainerDied","Data":"c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed"} Jan 28 16:30:52 crc kubenswrapper[4903]: I0128 16:30:52.142695 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qhjc" event={"ID":"bdbb7ec1-e0d7-498c-99c8-844466bafdc9","Type":"ContainerStarted","Data":"758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704"} Jan 28 16:30:52 crc kubenswrapper[4903]: I0128 16:30:52.171312 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6qhjc" podStartSLOduration=2.698817245 podStartE2EDuration="6.171293723s" 
podCreationTimestamp="2026-01-28 16:30:46 +0000 UTC" firstStartedPulling="2026-01-28 16:30:48.11298572 +0000 UTC m=+2720.388957231" lastFinishedPulling="2026-01-28 16:30:51.585462198 +0000 UTC m=+2723.861433709" observedRunningTime="2026-01-28 16:30:52.165748691 +0000 UTC m=+2724.441720202" watchObservedRunningTime="2026-01-28 16:30:52.171293723 +0000 UTC m=+2724.447265234" Jan 28 16:30:52 crc kubenswrapper[4903]: I0128 16:30:52.367282 4903 scope.go:117] "RemoveContainer" containerID="d8084ad351cce3a1f6006c8d90267e8a3714a75e0e207d86b8d34f832206762e" Jan 28 16:30:56 crc kubenswrapper[4903]: I0128 16:30:56.613684 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:30:56 crc kubenswrapper[4903]: I0128 16:30:56.614200 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:30:56 crc kubenswrapper[4903]: I0128 16:30:56.614262 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:30:56 crc kubenswrapper[4903]: I0128 16:30:56.615013 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07c1e0c99b0ff86052a37b1673bc8a9a400253f36ddce6f5e9c449e6e63bec92"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:30:56 crc kubenswrapper[4903]: I0128 16:30:56.615081 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://07c1e0c99b0ff86052a37b1673bc8a9a400253f36ddce6f5e9c449e6e63bec92" gracePeriod=600 Jan 28 16:30:57 crc kubenswrapper[4903]: I0128 16:30:57.156665 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:57 crc kubenswrapper[4903]: I0128 16:30:57.157086 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:30:57 crc kubenswrapper[4903]: I0128 16:30:57.180285 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="07c1e0c99b0ff86052a37b1673bc8a9a400253f36ddce6f5e9c449e6e63bec92" exitCode=0 Jan 28 16:30:57 crc kubenswrapper[4903]: I0128 16:30:57.180331 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"07c1e0c99b0ff86052a37b1673bc8a9a400253f36ddce6f5e9c449e6e63bec92"} Jan 28 16:30:57 crc kubenswrapper[4903]: I0128 16:30:57.180702 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b"} Jan 28 16:30:57 crc kubenswrapper[4903]: I0128 16:30:57.180733 4903 scope.go:117] "RemoveContainer" containerID="5f9994af30ce611a25017bfa14bb8873475188bb1191efb7aa7beddc2eba4571" Jan 28 16:30:58 crc kubenswrapper[4903]: I0128 16:30:58.200486 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6qhjc" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="registry-server" probeResult="failure" output=< Jan 28 16:30:58 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 16:30:58 crc kubenswrapper[4903]: > Jan 28 16:31:07 crc kubenswrapper[4903]: I0128 16:31:07.221202 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:31:07 crc kubenswrapper[4903]: I0128 16:31:07.272761 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:31:07 crc kubenswrapper[4903]: I0128 16:31:07.464085 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qhjc"] Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.276087 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6qhjc" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="registry-server" containerID="cri-o://758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704" gracePeriod=2 Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.646284 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.726509 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nr84\" (UniqueName: \"kubernetes.io/projected/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-kube-api-access-8nr84\") pod \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.726721 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-catalog-content\") pod \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.726805 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-utilities\") pod \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\" (UID: \"bdbb7ec1-e0d7-498c-99c8-844466bafdc9\") " Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.727800 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-utilities" (OuterVolumeSpecName: "utilities") pod "bdbb7ec1-e0d7-498c-99c8-844466bafdc9" (UID: "bdbb7ec1-e0d7-498c-99c8-844466bafdc9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.732697 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-kube-api-access-8nr84" (OuterVolumeSpecName: "kube-api-access-8nr84") pod "bdbb7ec1-e0d7-498c-99c8-844466bafdc9" (UID: "bdbb7ec1-e0d7-498c-99c8-844466bafdc9"). InnerVolumeSpecName "kube-api-access-8nr84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.828097 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.828129 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nr84\" (UniqueName: \"kubernetes.io/projected/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-kube-api-access-8nr84\") on node \"crc\" DevicePath \"\"" Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.874717 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdbb7ec1-e0d7-498c-99c8-844466bafdc9" (UID: "bdbb7ec1-e0d7-498c-99c8-844466bafdc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:31:08 crc kubenswrapper[4903]: I0128 16:31:08.929105 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdbb7ec1-e0d7-498c-99c8-844466bafdc9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.287769 4903 generic.go:334] "Generic (PLEG): container finished" podID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerID="758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704" exitCode=0 Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.287822 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qhjc" event={"ID":"bdbb7ec1-e0d7-498c-99c8-844466bafdc9","Type":"ContainerDied","Data":"758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704"} Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.287877 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qhjc" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.287904 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qhjc" event={"ID":"bdbb7ec1-e0d7-498c-99c8-844466bafdc9","Type":"ContainerDied","Data":"cc5215eb4159df6717ff44a838c7e44c136930b4c525dd5ca3548a591d89915b"} Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.287969 4903 scope.go:117] "RemoveContainer" containerID="758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.315997 4903 scope.go:117] "RemoveContainer" containerID="c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.328029 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qhjc"] Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.336681 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6qhjc"] Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.345851 4903 scope.go:117] "RemoveContainer" containerID="4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.375180 4903 scope.go:117] "RemoveContainer" containerID="758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704" Jan 28 16:31:09 crc kubenswrapper[4903]: E0128 16:31:09.375760 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704\": container with ID starting with 758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704 not found: ID does not exist" containerID="758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.375806 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704"} err="failed to get container status \"758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704\": rpc error: code = NotFound desc = could not find container \"758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704\": container with ID starting with 758327aac10939305fd994cbc310603c526f83ec1b5fbbd8c6ea6b76a9a62704 not found: ID does not exist" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.375833 4903 scope.go:117] "RemoveContainer" containerID="c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed" Jan 28 16:31:09 crc kubenswrapper[4903]: E0128 16:31:09.376272 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed\": container with ID starting with c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed not found: ID does not exist" containerID="c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.376295 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed"} err="failed to get container status \"c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed\": rpc error: code = NotFound desc = could not find container 
\"c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed\": container with ID starting with c28bf3998f0dc389ea928075e6eaf678ddd99a64f7b7bfc38e0df050848655ed not found: ID does not exist" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.376309 4903 scope.go:117] "RemoveContainer" containerID="4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3" Jan 28 16:31:09 crc kubenswrapper[4903]: E0128 16:31:09.376652 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3\": container with ID starting with 4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3 not found: ID does not exist" containerID="4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3" Jan 28 16:31:09 crc kubenswrapper[4903]: I0128 16:31:09.376675 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3"} err="failed to get container status \"4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3\": rpc error: code = NotFound desc = could not find container \"4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3\": container with ID starting with 4bec5b36fb9ca3741c15b08c08de091b18acf2432e68c3cb76a521dd343db7a3 not found: ID does not exist" Jan 28 16:31:10 crc kubenswrapper[4903]: I0128 16:31:10.425279 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" path="/var/lib/kubelet/pods/bdbb7ec1-e0d7-498c-99c8-844466bafdc9/volumes" Jan 28 16:32:56 crc kubenswrapper[4903]: I0128 16:32:56.613706 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:32:56 crc kubenswrapper[4903]: I0128 16:32:56.615969 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.743742 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hjql4"] Jan 28 16:33:21 crc kubenswrapper[4903]: E0128 16:33:21.745296 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="extract-content" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.745312 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="extract-content" Jan 28 16:33:21 crc kubenswrapper[4903]: E0128 16:33:21.745323 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="extract-utilities" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.745332 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="extract-utilities" Jan 28 16:33:21 crc kubenswrapper[4903]: E0128 16:33:21.745358 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="registry-server" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.745364 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="registry-server" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.745503 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbb7ec1-e0d7-498c-99c8-844466bafdc9" containerName="registry-server" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.746406 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.756839 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjql4"] Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.773755 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg45j\" (UniqueName: \"kubernetes.io/projected/c921f590-99eb-4c1b-aa11-b5a47846bf48-kube-api-access-cg45j\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.773943 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-catalog-content\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.774149 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-utilities\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.875364 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-catalog-content\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.875773 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-utilities\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.876082 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg45j\" (UniqueName: \"kubernetes.io/projected/c921f590-99eb-4c1b-aa11-b5a47846bf48-kube-api-access-cg45j\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.876227 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-catalog-content\") pod 
\"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.877100 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-utilities\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:21 crc kubenswrapper[4903]: I0128 16:33:21.898015 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg45j\" (UniqueName: \"kubernetes.io/projected/c921f590-99eb-4c1b-aa11-b5a47846bf48-kube-api-access-cg45j\") pod \"community-operators-hjql4\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:22 crc kubenswrapper[4903]: I0128 16:33:22.085077 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:22 crc kubenswrapper[4903]: I0128 16:33:22.575544 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjql4"] Jan 28 16:33:23 crc kubenswrapper[4903]: I0128 16:33:23.309576 4903 generic.go:334] "Generic (PLEG): container finished" podID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerID="ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333" exitCode=0 Jan 28 16:33:23 crc kubenswrapper[4903]: I0128 16:33:23.309683 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjql4" event={"ID":"c921f590-99eb-4c1b-aa11-b5a47846bf48","Type":"ContainerDied","Data":"ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333"} Jan 28 16:33:23 crc kubenswrapper[4903]: I0128 16:33:23.309878 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjql4" event={"ID":"c921f590-99eb-4c1b-aa11-b5a47846bf48","Type":"ContainerStarted","Data":"64c82bf3902dd7547fb77aff12b708dbf1102c5839dfebb5ec6c141c06f20f13"} Jan 28 16:33:25 crc kubenswrapper[4903]: I0128 16:33:25.328397 4903 generic.go:334] "Generic (PLEG): container finished" podID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerID="960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880" exitCode=0 Jan 28 16:33:25 crc kubenswrapper[4903]: I0128 16:33:25.328641 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjql4" event={"ID":"c921f590-99eb-4c1b-aa11-b5a47846bf48","Type":"ContainerDied","Data":"960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880"} Jan 28 16:33:26 crc kubenswrapper[4903]: I0128 16:33:26.363285 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjql4" event={"ID":"c921f590-99eb-4c1b-aa11-b5a47846bf48","Type":"ContainerStarted","Data":"694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92"} Jan 28 16:33:26 crc kubenswrapper[4903]: I0128 16:33:26.396434 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hjql4" podStartSLOduration=2.873243931 podStartE2EDuration="5.396402792s" podCreationTimestamp="2026-01-28 16:33:21 +0000 UTC" firstStartedPulling="2026-01-28 16:33:23.312935673 +0000 UTC m=+2875.588907184" lastFinishedPulling="2026-01-28 16:33:25.836094544 +0000 UTC m=+2878.112066045" 
observedRunningTime="2026-01-28 16:33:26.38792922 +0000 UTC m=+2878.663900731" watchObservedRunningTime="2026-01-28 16:33:26.396402792 +0000 UTC m=+2878.672374303" Jan 28 16:33:26 crc kubenswrapper[4903]: I0128 16:33:26.613916 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:33:26 crc kubenswrapper[4903]: I0128 16:33:26.613978 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:33:32 crc kubenswrapper[4903]: I0128 16:33:32.086190 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:32 crc kubenswrapper[4903]: I0128 16:33:32.087722 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:32 crc kubenswrapper[4903]: I0128 16:33:32.135480 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:32 crc kubenswrapper[4903]: I0128 16:33:32.464144 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:32 crc kubenswrapper[4903]: I0128 16:33:32.520931 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjql4"] Jan 28 16:33:34 crc kubenswrapper[4903]: I0128 16:33:34.431680 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hjql4" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="registry-server" containerID="cri-o://694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92" gracePeriod=2 Jan 28 16:33:34 crc kubenswrapper[4903]: I0128 16:33:34.880432 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:34 crc kubenswrapper[4903]: I0128 16:33:34.943022 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg45j\" (UniqueName: \"kubernetes.io/projected/c921f590-99eb-4c1b-aa11-b5a47846bf48-kube-api-access-cg45j\") pod \"c921f590-99eb-4c1b-aa11-b5a47846bf48\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " Jan 28 16:33:34 crc kubenswrapper[4903]: I0128 16:33:34.943079 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-catalog-content\") pod \"c921f590-99eb-4c1b-aa11-b5a47846bf48\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " Jan 28 16:33:34 crc kubenswrapper[4903]: I0128 16:33:34.943167 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-utilities\") pod \"c921f590-99eb-4c1b-aa11-b5a47846bf48\" (UID: \"c921f590-99eb-4c1b-aa11-b5a47846bf48\") " Jan 28 16:33:34 crc kubenswrapper[4903]: I0128 16:33:34.944058 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-utilities" (OuterVolumeSpecName: "utilities") pod "c921f590-99eb-4c1b-aa11-b5a47846bf48" (UID: "c921f590-99eb-4c1b-aa11-b5a47846bf48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:33:34 crc kubenswrapper[4903]: I0128 16:33:34.949074 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c921f590-99eb-4c1b-aa11-b5a47846bf48-kube-api-access-cg45j" (OuterVolumeSpecName: "kube-api-access-cg45j") pod "c921f590-99eb-4c1b-aa11-b5a47846bf48" (UID: "c921f590-99eb-4c1b-aa11-b5a47846bf48"). InnerVolumeSpecName "kube-api-access-cg45j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.007205 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c921f590-99eb-4c1b-aa11-b5a47846bf48" (UID: "c921f590-99eb-4c1b-aa11-b5a47846bf48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.045428 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg45j\" (UniqueName: \"kubernetes.io/projected/c921f590-99eb-4c1b-aa11-b5a47846bf48-kube-api-access-cg45j\") on node \"crc\" DevicePath \"\"" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.045518 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.045556 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c921f590-99eb-4c1b-aa11-b5a47846bf48-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.444949 4903 generic.go:334] "Generic (PLEG): container finished" podID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerID="694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92" exitCode=0 Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.445003 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjql4" event={"ID":"c921f590-99eb-4c1b-aa11-b5a47846bf48","Type":"ContainerDied","Data":"694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92"} Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.445063 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjql4" event={"ID":"c921f590-99eb-4c1b-aa11-b5a47846bf48","Type":"ContainerDied","Data":"64c82bf3902dd7547fb77aff12b708dbf1102c5839dfebb5ec6c141c06f20f13"} Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.445084 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjql4" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.445095 4903 scope.go:117] "RemoveContainer" containerID="694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.470809 4903 scope.go:117] "RemoveContainer" containerID="960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.496155 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjql4"] Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.500255 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hjql4"] Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.501567 4903 scope.go:117] "RemoveContainer" containerID="ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.532263 4903 scope.go:117] "RemoveContainer" containerID="694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92" Jan 28 16:33:35 crc kubenswrapper[4903]: E0128 16:33:35.533566 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92\": container with ID starting with 694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92 not found: ID does not exist" containerID="694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.533632 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92"} err="failed to get container status \"694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92\": rpc error: code = NotFound desc = could not find container \"694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92\": container with ID starting with 694f39129e194c09e3177b3db6ec58e4c2c1aa0989ef0bc0c06affb7b986ba92 not found: ID does not exist" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.533679 4903 scope.go:117] "RemoveContainer" containerID="960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880" Jan 28 16:33:35 crc kubenswrapper[4903]: E0128 16:33:35.534223 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880\": container with ID starting with 960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880 not found: ID does not exist" containerID="960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.534262 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880"} err="failed to get container status \"960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880\": rpc error: code = NotFound desc = could not find container \"960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880\": container with ID starting with 960c2cabf944b009e4ee56d769224194e884fae4fb34e6f8fa52dc5a6d524880 not found: ID does not exist" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.534288 4903 scope.go:117] "RemoveContainer" 
containerID="ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333" Jan 28 16:33:35 crc kubenswrapper[4903]: E0128 16:33:35.534952 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333\": container with ID starting with ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333 not found: ID does not exist" containerID="ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333" Jan 28 16:33:35 crc kubenswrapper[4903]: I0128 16:33:35.535039 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333"} err="failed to get container status \"ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333\": rpc error: code = NotFound desc = could not find container \"ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333\": container with ID starting with ed4a6a9d415c23b855cef476e0533baef509901b95490528735fe5d3f515d333 not found: ID does not exist" Jan 28 16:33:36 crc kubenswrapper[4903]: I0128 16:33:36.425374 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" path="/var/lib/kubelet/pods/c921f590-99eb-4c1b-aa11-b5a47846bf48/volumes" Jan 28 16:33:56 crc kubenswrapper[4903]: I0128 16:33:56.613617 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:33:56 crc kubenswrapper[4903]: I0128 16:33:56.614138 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:33:56 crc kubenswrapper[4903]: I0128 16:33:56.614178 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:33:56 crc kubenswrapper[4903]: I0128 16:33:56.614807 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:33:56 crc kubenswrapper[4903]: I0128 16:33:56.614859 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" gracePeriod=600 Jan 28 16:33:56 crc kubenswrapper[4903]: E0128 16:33:56.805259 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:33:57 crc kubenswrapper[4903]: I0128 16:33:57.609412 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" exitCode=0 Jan 28 16:33:57 crc kubenswrapper[4903]: I0128 16:33:57.609464 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b"} Jan 28 16:33:57 crc kubenswrapper[4903]: I0128 16:33:57.609505 4903 scope.go:117] "RemoveContainer" containerID="07c1e0c99b0ff86052a37b1673bc8a9a400253f36ddce6f5e9c449e6e63bec92" Jan 28 16:33:57 crc kubenswrapper[4903]: I0128 16:33:57.610019 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:33:57 crc kubenswrapper[4903]: E0128 16:33:57.610333 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:34:11 crc kubenswrapper[4903]: I0128 16:34:11.413573 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:34:11 crc kubenswrapper[4903]: E0128 16:34:11.414975 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:34:25 crc kubenswrapper[4903]: I0128 16:34:25.413192 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:34:25 crc kubenswrapper[4903]: E0128 16:34:25.414280 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:34:38 crc kubenswrapper[4903]: I0128 16:34:38.419582 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:34:38 crc kubenswrapper[4903]: E0128 16:34:38.420418 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:34:51 crc kubenswrapper[4903]: I0128 16:34:51.413938 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:34:51 crc kubenswrapper[4903]: E0128 16:34:51.414860 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:35:04 crc kubenswrapper[4903]: I0128 16:35:04.413646 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:35:04 crc kubenswrapper[4903]: E0128 16:35:04.414440 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:35:18 crc kubenswrapper[4903]: I0128 16:35:18.419377 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:35:18 crc kubenswrapper[4903]: E0128 16:35:18.420169 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:35:33 crc kubenswrapper[4903]: I0128 16:35:33.413617 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:35:33 crc kubenswrapper[4903]: E0128 16:35:33.414214 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:35:46 crc kubenswrapper[4903]: I0128 16:35:46.413766 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:35:46 crc kubenswrapper[4903]: E0128 16:35:46.416445 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:35:57 crc kubenswrapper[4903]: I0128 16:35:57.413971 4903 scope.go:117] "RemoveContainer" 
containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:35:57 crc kubenswrapper[4903]: E0128 16:35:57.414885 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.735816 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w9m88"] Jan 28 16:36:03 crc kubenswrapper[4903]: E0128 16:36:03.736752 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="extract-utilities" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.736768 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="extract-utilities" Jan 28 16:36:03 crc kubenswrapper[4903]: E0128 16:36:03.736784 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="extract-content" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.736792 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="extract-content" Jan 28 16:36:03 crc kubenswrapper[4903]: E0128 16:36:03.736821 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="registry-server" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.736828 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="registry-server" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.736987 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c921f590-99eb-4c1b-aa11-b5a47846bf48" containerName="registry-server" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.738194 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.754638 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9m88"] Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.757655 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62dr\" (UniqueName: \"kubernetes.io/projected/3189eb1e-5582-43b0-94e6-c396e5de5369-kube-api-access-q62dr\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.757747 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-catalog-content\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.757951 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-utilities\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.859869 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q62dr\" (UniqueName: \"kubernetes.io/projected/3189eb1e-5582-43b0-94e6-c396e5de5369-kube-api-access-q62dr\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.859982 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-catalog-content\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.860018 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-utilities\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.860422 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-utilities\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.860522 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-catalog-content\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:03 crc kubenswrapper[4903]: I0128 16:36:03.885021 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q62dr\" (UniqueName: \"kubernetes.io/projected/3189eb1e-5582-43b0-94e6-c396e5de5369-kube-api-access-q62dr\") pod \"redhat-marketplace-w9m88\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:04 crc kubenswrapper[4903]: I0128 16:36:04.058680 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:04 crc kubenswrapper[4903]: I0128 16:36:04.326228 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9m88"] Jan 28 16:36:04 crc kubenswrapper[4903]: I0128 16:36:04.706865 4903 generic.go:334] "Generic (PLEG): container finished" podID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerID="7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7" exitCode=0 Jan 28 16:36:04 crc kubenswrapper[4903]: I0128 16:36:04.707117 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9m88" event={"ID":"3189eb1e-5582-43b0-94e6-c396e5de5369","Type":"ContainerDied","Data":"7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7"} Jan 28 16:36:04 crc kubenswrapper[4903]: I0128 16:36:04.707141 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9m88" event={"ID":"3189eb1e-5582-43b0-94e6-c396e5de5369","Type":"ContainerStarted","Data":"b9f0ad6c879833952b48504800c268c6a7d69585a317e092271c8829382cca37"} Jan 28 16:36:04 crc kubenswrapper[4903]: I0128 16:36:04.708858 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:36:05 crc kubenswrapper[4903]: I0128 16:36:05.718072 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9m88" event={"ID":"3189eb1e-5582-43b0-94e6-c396e5de5369","Type":"ContainerStarted","Data":"61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301"} Jan 28 16:36:06 crc kubenswrapper[4903]: I0128 16:36:06.726218 4903 generic.go:334] "Generic (PLEG): container finished" podID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerID="61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301" exitCode=0 Jan 28 16:36:06 crc kubenswrapper[4903]: I0128 16:36:06.726281 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9m88" event={"ID":"3189eb1e-5582-43b0-94e6-c396e5de5369","Type":"ContainerDied","Data":"61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301"} Jan 28 16:36:08 crc kubenswrapper[4903]: I0128 16:36:08.434014 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:36:08 crc kubenswrapper[4903]: E0128 16:36:08.434282 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:36:11 crc kubenswrapper[4903]: I0128 16:36:11.767083 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9m88" 
event={"ID":"3189eb1e-5582-43b0-94e6-c396e5de5369","Type":"ContainerStarted","Data":"3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e"} Jan 28 16:36:11 crc kubenswrapper[4903]: I0128 16:36:11.793831 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w9m88" podStartSLOduration=2.068787541 podStartE2EDuration="8.793811999s" podCreationTimestamp="2026-01-28 16:36:03 +0000 UTC" firstStartedPulling="2026-01-28 16:36:04.708642823 +0000 UTC m=+3036.984614334" lastFinishedPulling="2026-01-28 16:36:11.433667241 +0000 UTC m=+3043.709638792" observedRunningTime="2026-01-28 16:36:11.789373268 +0000 UTC m=+3044.065344779" watchObservedRunningTime="2026-01-28 16:36:11.793811999 +0000 UTC m=+3044.069783500" Jan 28 16:36:14 crc kubenswrapper[4903]: I0128 16:36:14.059139 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:14 crc kubenswrapper[4903]: I0128 16:36:14.059758 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:14 crc kubenswrapper[4903]: I0128 16:36:14.129618 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:23 crc kubenswrapper[4903]: I0128 16:36:23.413845 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:36:23 crc kubenswrapper[4903]: E0128 16:36:23.414819 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:36:24 crc kubenswrapper[4903]: I0128 16:36:24.123613 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:24 crc kubenswrapper[4903]: I0128 16:36:24.191420 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9m88"] Jan 28 16:36:24 crc kubenswrapper[4903]: I0128 16:36:24.869074 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w9m88" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="registry-server" containerID="cri-o://3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e" gracePeriod=2 Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.362702 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.382637 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q62dr\" (UniqueName: \"kubernetes.io/projected/3189eb1e-5582-43b0-94e6-c396e5de5369-kube-api-access-q62dr\") pod \"3189eb1e-5582-43b0-94e6-c396e5de5369\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.382729 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-catalog-content\") pod \"3189eb1e-5582-43b0-94e6-c396e5de5369\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.382776 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-utilities\") pod \"3189eb1e-5582-43b0-94e6-c396e5de5369\" (UID: \"3189eb1e-5582-43b0-94e6-c396e5de5369\") " Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.385295 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-utilities" (OuterVolumeSpecName: "utilities") pod "3189eb1e-5582-43b0-94e6-c396e5de5369" (UID: "3189eb1e-5582-43b0-94e6-c396e5de5369"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.403764 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3189eb1e-5582-43b0-94e6-c396e5de5369-kube-api-access-q62dr" (OuterVolumeSpecName: "kube-api-access-q62dr") pod "3189eb1e-5582-43b0-94e6-c396e5de5369" (UID: "3189eb1e-5582-43b0-94e6-c396e5de5369"). InnerVolumeSpecName "kube-api-access-q62dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.432313 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3189eb1e-5582-43b0-94e6-c396e5de5369" (UID: "3189eb1e-5582-43b0-94e6-c396e5de5369"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.484343 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q62dr\" (UniqueName: \"kubernetes.io/projected/3189eb1e-5582-43b0-94e6-c396e5de5369-kube-api-access-q62dr\") on node \"crc\" DevicePath \"\"" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.484384 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.484399 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3189eb1e-5582-43b0-94e6-c396e5de5369-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.877367 4903 generic.go:334] "Generic (PLEG): container finished" podID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerID="3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e" exitCode=0 Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.877419 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9m88" event={"ID":"3189eb1e-5582-43b0-94e6-c396e5de5369","Type":"ContainerDied","Data":"3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e"} Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.877430 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9m88" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.877458 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9m88" event={"ID":"3189eb1e-5582-43b0-94e6-c396e5de5369","Type":"ContainerDied","Data":"b9f0ad6c879833952b48504800c268c6a7d69585a317e092271c8829382cca37"} Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.877478 4903 scope.go:117] "RemoveContainer" containerID="3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.895380 4903 scope.go:117] "RemoveContainer" containerID="61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.915089 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9m88"] Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.922113 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9m88"] Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.931184 4903 scope.go:117] "RemoveContainer" containerID="7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.945254 4903 scope.go:117] "RemoveContainer" containerID="3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e" Jan 28 16:36:25 crc kubenswrapper[4903]: E0128 16:36:25.945555 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e\": container with ID starting with 3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e not found: ID does not exist" containerID="3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.945586 4903 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e"} err="failed to get container status \"3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e\": rpc error: code = NotFound desc = could not find container \"3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e\": container with ID starting with 3904d588a6682162b2dc9445b867dc1744ce02124ce0e42fd9e34946b38dcc7e not found: ID does not exist" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.945608 4903 scope.go:117] "RemoveContainer" containerID="61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301" Jan 28 16:36:25 crc kubenswrapper[4903]: E0128 16:36:25.945787 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301\": container with ID starting with 61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301 not found: ID does not exist" containerID="61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.945810 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301"} err="failed to get container status \"61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301\": rpc error: code = NotFound desc = could not find container \"61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301\": container with ID starting with 61a94787db2d1627075fb0993cbff43372373da72481a6d32aaa2d2a8ed8c301 not found: ID does not exist" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.945822 4903 scope.go:117] "RemoveContainer" containerID="7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7" Jan 28 16:36:25 crc kubenswrapper[4903]: E0128 16:36:25.946001 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7\": container with ID starting with 7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7 not found: ID does not exist" containerID="7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7" Jan 28 16:36:25 crc kubenswrapper[4903]: I0128 16:36:25.946025 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7"} err="failed to get container status \"7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7\": rpc error: code = NotFound desc = could not find container \"7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7\": container with ID starting with 7c90c4adddd57f137f84e879858cc783b2e9aef70bf6d9d74a918e90beb262e7 not found: ID does not exist" Jan 28 16:36:26 crc kubenswrapper[4903]: I0128 16:36:26.439087 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" path="/var/lib/kubelet/pods/3189eb1e-5582-43b0-94e6-c396e5de5369/volumes" Jan 28 16:36:37 crc kubenswrapper[4903]: I0128 16:36:37.413951 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:36:37 crc kubenswrapper[4903]: E0128 16:36:37.414712 4903 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:36:51 crc kubenswrapper[4903]: I0128 16:36:51.414374 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:36:51 crc kubenswrapper[4903]: E0128 16:36:51.415483 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:37:06 crc kubenswrapper[4903]: I0128 16:37:06.414821 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:37:06 crc kubenswrapper[4903]: E0128 16:37:06.415588 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:37:20 crc kubenswrapper[4903]: I0128 16:37:20.413611 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:37:20 crc kubenswrapper[4903]: E0128 16:37:20.414202 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:37:33 crc kubenswrapper[4903]: I0128 16:37:33.412851 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:37:33 crc kubenswrapper[4903]: E0128 16:37:33.413419 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:37:48 crc kubenswrapper[4903]: I0128 16:37:48.419505 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:37:48 crc kubenswrapper[4903]: E0128 16:37:48.420838 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:37:59 crc kubenswrapper[4903]: I0128 16:37:59.414458 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:37:59 crc kubenswrapper[4903]: E0128 16:37:59.415626 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:38:12 crc kubenswrapper[4903]: I0128 16:38:12.414201 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:38:12 crc kubenswrapper[4903]: E0128 16:38:12.415518 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:38:26 crc kubenswrapper[4903]: I0128 16:38:26.413450 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:38:26 crc kubenswrapper[4903]: E0128 16:38:26.414248 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:38:39 crc kubenswrapper[4903]: I0128 16:38:39.413718 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:38:39 crc kubenswrapper[4903]: E0128 16:38:39.414448 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:38:53 crc kubenswrapper[4903]: I0128 16:38:53.413431 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:38:53 crc kubenswrapper[4903]: E0128 16:38:53.414182 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:39:07 crc kubenswrapper[4903]: I0128 16:39:07.413237 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:39:08 crc kubenswrapper[4903]: I0128 16:39:08.250700 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"b7188b4c8459c8850104b4379297ad3e09c2a0a015874d89687d600cc501edad"} Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.523818 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x7mr9"] Jan 28 16:39:37 crc kubenswrapper[4903]: E0128 16:39:37.525088 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="extract-utilities" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.525111 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="extract-utilities" Jan 28 16:39:37 crc kubenswrapper[4903]: E0128 16:39:37.525136 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="extract-content" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.525147 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="extract-content" Jan 28 16:39:37 crc kubenswrapper[4903]: E0128 16:39:37.525164 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="registry-server" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.525192 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="registry-server" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.525410 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="3189eb1e-5582-43b0-94e6-c396e5de5369" containerName="registry-server" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.527055 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.542138 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x7mr9"] Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.577034 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8mf\" (UniqueName: \"kubernetes.io/projected/22cad103-fd7d-4688-bee8-a903fcde7739-kube-api-access-xs8mf\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.577089 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-utilities\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.577122 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-catalog-content\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.678076 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8mf\" (UniqueName: \"kubernetes.io/projected/22cad103-fd7d-4688-bee8-a903fcde7739-kube-api-access-xs8mf\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.678347 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-utilities\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.678441 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-catalog-content\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.678897 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-catalog-content\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.678966 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-utilities\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.697240 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xs8mf\" (UniqueName: \"kubernetes.io/projected/22cad103-fd7d-4688-bee8-a903fcde7739-kube-api-access-xs8mf\") pod \"certified-operators-x7mr9\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:37 crc kubenswrapper[4903]: I0128 16:39:37.854684 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:38 crc kubenswrapper[4903]: I0128 16:39:38.349366 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x7mr9"] Jan 28 16:39:38 crc kubenswrapper[4903]: I0128 16:39:38.500127 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mr9" event={"ID":"22cad103-fd7d-4688-bee8-a903fcde7739","Type":"ContainerStarted","Data":"8cba7b755824012394bece51e308294e672893b5a1aee38a4e8cad83fd658011"} Jan 28 16:39:39 crc kubenswrapper[4903]: I0128 16:39:39.513014 4903 generic.go:334] "Generic (PLEG): container finished" podID="22cad103-fd7d-4688-bee8-a903fcde7739" containerID="72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5" exitCode=0 Jan 28 16:39:39 crc kubenswrapper[4903]: I0128 16:39:39.513136 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mr9" event={"ID":"22cad103-fd7d-4688-bee8-a903fcde7739","Type":"ContainerDied","Data":"72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5"} Jan 28 16:39:40 crc kubenswrapper[4903]: I0128 16:39:40.523347 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mr9" event={"ID":"22cad103-fd7d-4688-bee8-a903fcde7739","Type":"ContainerStarted","Data":"773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4"} Jan 28 16:39:41 crc kubenswrapper[4903]: I0128 16:39:41.556576 4903 generic.go:334] "Generic (PLEG): container finished" podID="22cad103-fd7d-4688-bee8-a903fcde7739" containerID="773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4" exitCode=0 Jan 28 16:39:41 crc kubenswrapper[4903]: I0128 16:39:41.556639 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mr9" event={"ID":"22cad103-fd7d-4688-bee8-a903fcde7739","Type":"ContainerDied","Data":"773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4"} Jan 28 16:39:42 crc kubenswrapper[4903]: I0128 16:39:42.571134 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mr9" event={"ID":"22cad103-fd7d-4688-bee8-a903fcde7739","Type":"ContainerStarted","Data":"485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769"} Jan 28 16:39:42 crc kubenswrapper[4903]: I0128 16:39:42.602737 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x7mr9" podStartSLOduration=3.099705568 podStartE2EDuration="5.602706148s" podCreationTimestamp="2026-01-28 16:39:37 +0000 UTC" firstStartedPulling="2026-01-28 16:39:39.516998272 +0000 UTC m=+3251.792969823" lastFinishedPulling="2026-01-28 16:39:42.019998902 +0000 UTC m=+3254.295970403" observedRunningTime="2026-01-28 16:39:42.600656672 +0000 UTC m=+3254.876628193" watchObservedRunningTime="2026-01-28 16:39:42.602706148 +0000 UTC m=+3254.878677679" Jan 28 16:39:47 crc kubenswrapper[4903]: I0128 16:39:47.855626 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:47 crc kubenswrapper[4903]: I0128 16:39:47.856120 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:47 crc kubenswrapper[4903]: I0128 16:39:47.907501 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:48 crc kubenswrapper[4903]: I0128 16:39:48.693723 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:48 crc kubenswrapper[4903]: I0128 16:39:48.754281 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x7mr9"] Jan 28 16:39:50 crc kubenswrapper[4903]: I0128 16:39:50.638907 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x7mr9" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="registry-server" containerID="cri-o://485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769" gracePeriod=2 Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.137902 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.321318 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs8mf\" (UniqueName: \"kubernetes.io/projected/22cad103-fd7d-4688-bee8-a903fcde7739-kube-api-access-xs8mf\") pod \"22cad103-fd7d-4688-bee8-a903fcde7739\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.321416 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-catalog-content\") pod \"22cad103-fd7d-4688-bee8-a903fcde7739\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.321449 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-utilities\") pod \"22cad103-fd7d-4688-bee8-a903fcde7739\" (UID: \"22cad103-fd7d-4688-bee8-a903fcde7739\") " Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.324976 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-utilities" (OuterVolumeSpecName: "utilities") pod "22cad103-fd7d-4688-bee8-a903fcde7739" (UID: "22cad103-fd7d-4688-bee8-a903fcde7739"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.335143 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22cad103-fd7d-4688-bee8-a903fcde7739-kube-api-access-xs8mf" (OuterVolumeSpecName: "kube-api-access-xs8mf") pod "22cad103-fd7d-4688-bee8-a903fcde7739" (UID: "22cad103-fd7d-4688-bee8-a903fcde7739"). InnerVolumeSpecName "kube-api-access-xs8mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.423507 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs8mf\" (UniqueName: \"kubernetes.io/projected/22cad103-fd7d-4688-bee8-a903fcde7739-kube-api-access-xs8mf\") on node \"crc\" DevicePath \"\"" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.423631 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.649745 4903 generic.go:334] "Generic (PLEG): container finished" podID="22cad103-fd7d-4688-bee8-a903fcde7739" containerID="485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769" exitCode=0 Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.649808 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mr9" event={"ID":"22cad103-fd7d-4688-bee8-a903fcde7739","Type":"ContainerDied","Data":"485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769"} Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.649856 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7mr9" event={"ID":"22cad103-fd7d-4688-bee8-a903fcde7739","Type":"ContainerDied","Data":"8cba7b755824012394bece51e308294e672893b5a1aee38a4e8cad83fd658011"} Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.649878 4903 scope.go:117] "RemoveContainer" containerID="485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.649879 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x7mr9" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.672233 4903 scope.go:117] "RemoveContainer" containerID="773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.701460 4903 scope.go:117] "RemoveContainer" containerID="72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.742225 4903 scope.go:117] "RemoveContainer" containerID="485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769" Jan 28 16:39:51 crc kubenswrapper[4903]: E0128 16:39:51.742899 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769\": container with ID starting with 485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769 not found: ID does not exist" containerID="485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.742933 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769"} err="failed to get container status \"485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769\": rpc error: code = NotFound desc = could not find container \"485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769\": container with ID starting with 485e680661215874434c07f8c132a5a8c6e4b0d0cc82a2dbe118bf52731d4769 not found: ID does not exist" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.742958 4903 scope.go:117] "RemoveContainer" containerID="773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4" Jan 28 16:39:51 crc kubenswrapper[4903]: E0128 16:39:51.743519 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4\": container with ID starting with 773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4 not found: ID does not exist" containerID="773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.743781 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4"} err="failed to get container status \"773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4\": rpc error: code = NotFound desc = could not find container \"773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4\": container with ID starting with 773cabca39f07c31ad6d6adfc9293acc4eb3ac57c68849be97d49626795835d4 not found: ID does not exist" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.743996 4903 scope.go:117] "RemoveContainer" containerID="72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5" Jan 28 16:39:51 crc kubenswrapper[4903]: E0128 16:39:51.744603 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5\": container with ID starting with 72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5 not found: ID does not exist" containerID="72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5" 
Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.744635 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5"} err="failed to get container status \"72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5\": rpc error: code = NotFound desc = could not find container \"72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5\": container with ID starting with 72be36a264995641ada1f4c64579073ea0d2d65d5107c6e303aa42a3298efdd5 not found: ID does not exist" Jan 28 16:39:51 crc kubenswrapper[4903]: I0128 16:39:51.950767 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22cad103-fd7d-4688-bee8-a903fcde7739" (UID: "22cad103-fd7d-4688-bee8-a903fcde7739"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:39:52 crc kubenswrapper[4903]: I0128 16:39:52.033071 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cad103-fd7d-4688-bee8-a903fcde7739-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:39:52 crc kubenswrapper[4903]: I0128 16:39:52.311818 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x7mr9"] Jan 28 16:39:52 crc kubenswrapper[4903]: I0128 16:39:52.322488 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x7mr9"] Jan 28 16:39:52 crc kubenswrapper[4903]: I0128 16:39:52.429733 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" path="/var/lib/kubelet/pods/22cad103-fd7d-4688-bee8-a903fcde7739/volumes" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.642105 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5lpnf"] Jan 28 16:40:56 crc kubenswrapper[4903]: E0128 16:40:56.643363 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="extract-utilities" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.643387 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="extract-utilities" Jan 28 16:40:56 crc kubenswrapper[4903]: E0128 16:40:56.643410 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="extract-content" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.643422 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="extract-content" Jan 28 16:40:56 crc kubenswrapper[4903]: E0128 16:40:56.643463 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="registry-server" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.643475 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="registry-server" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.643773 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="22cad103-fd7d-4688-bee8-a903fcde7739" containerName="registry-server" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.645673 4903 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.649405 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lpnf"] Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.801468 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whkwg\" (UniqueName: \"kubernetes.io/projected/f243a481-94bf-48ca-94ab-7d11e75d222c-kube-api-access-whkwg\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.801525 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-utilities\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.801753 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-catalog-content\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.903254 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whkwg\" (UniqueName: \"kubernetes.io/projected/f243a481-94bf-48ca-94ab-7d11e75d222c-kube-api-access-whkwg\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.903326 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-utilities\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.903401 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-catalog-content\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.904032 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-catalog-content\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.904604 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-utilities\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.926955 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-whkwg\" (UniqueName: \"kubernetes.io/projected/f243a481-94bf-48ca-94ab-7d11e75d222c-kube-api-access-whkwg\") pod \"redhat-operators-5lpnf\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:56 crc kubenswrapper[4903]: I0128 16:40:56.966185 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:40:57 crc kubenswrapper[4903]: I0128 16:40:57.461570 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lpnf"] Jan 28 16:40:58 crc kubenswrapper[4903]: I0128 16:40:58.281556 4903 generic.go:334] "Generic (PLEG): container finished" podID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerID="008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320" exitCode=0 Jan 28 16:40:58 crc kubenswrapper[4903]: I0128 16:40:58.281701 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lpnf" event={"ID":"f243a481-94bf-48ca-94ab-7d11e75d222c","Type":"ContainerDied","Data":"008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320"} Jan 28 16:40:58 crc kubenswrapper[4903]: I0128 16:40:58.281882 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lpnf" event={"ID":"f243a481-94bf-48ca-94ab-7d11e75d222c","Type":"ContainerStarted","Data":"3a70cc2f0a7c4c1e3aaeec13f29187828ad06d408c2db7825acd64bcb55610f5"} Jan 28 16:40:59 crc kubenswrapper[4903]: I0128 16:40:59.294066 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lpnf" event={"ID":"f243a481-94bf-48ca-94ab-7d11e75d222c","Type":"ContainerStarted","Data":"6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6"} Jan 28 16:41:00 crc kubenswrapper[4903]: I0128 16:41:00.303731 4903 generic.go:334] "Generic (PLEG): container finished" podID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerID="6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6" exitCode=0 Jan 28 16:41:00 crc kubenswrapper[4903]: I0128 16:41:00.303785 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lpnf" event={"ID":"f243a481-94bf-48ca-94ab-7d11e75d222c","Type":"ContainerDied","Data":"6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6"} Jan 28 16:41:01 crc kubenswrapper[4903]: I0128 16:41:01.312566 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lpnf" event={"ID":"f243a481-94bf-48ca-94ab-7d11e75d222c","Type":"ContainerStarted","Data":"fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5"} Jan 28 16:41:06 crc kubenswrapper[4903]: I0128 16:41:06.966857 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:41:06 crc kubenswrapper[4903]: I0128 16:41:06.967734 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:41:08 crc kubenswrapper[4903]: I0128 16:41:08.033312 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5lpnf" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="registry-server" probeResult="failure" output=< Jan 28 16:41:08 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 16:41:08 crc kubenswrapper[4903]: > Jan 28 
16:41:17 crc kubenswrapper[4903]: I0128 16:41:17.009584 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:41:17 crc kubenswrapper[4903]: I0128 16:41:17.038517 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5lpnf" podStartSLOduration=18.575085934 podStartE2EDuration="21.038488726s" podCreationTimestamp="2026-01-28 16:40:56 +0000 UTC" firstStartedPulling="2026-01-28 16:40:58.283414339 +0000 UTC m=+3330.559385870" lastFinishedPulling="2026-01-28 16:41:00.746817151 +0000 UTC m=+3333.022788662" observedRunningTime="2026-01-28 16:41:01.329554309 +0000 UTC m=+3333.605525820" watchObservedRunningTime="2026-01-28 16:41:17.038488726 +0000 UTC m=+3349.314460247" Jan 28 16:41:17 crc kubenswrapper[4903]: I0128 16:41:17.057631 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:41:17 crc kubenswrapper[4903]: I0128 16:41:17.253320 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5lpnf"] Jan 28 16:41:18 crc kubenswrapper[4903]: I0128 16:41:18.487174 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5lpnf" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="registry-server" containerID="cri-o://fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5" gracePeriod=2 Jan 28 16:41:18 crc kubenswrapper[4903]: I0128 16:41:18.939097 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.035080 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-catalog-content\") pod \"f243a481-94bf-48ca-94ab-7d11e75d222c\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.035336 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whkwg\" (UniqueName: \"kubernetes.io/projected/f243a481-94bf-48ca-94ab-7d11e75d222c-kube-api-access-whkwg\") pod \"f243a481-94bf-48ca-94ab-7d11e75d222c\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.035633 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-utilities\") pod \"f243a481-94bf-48ca-94ab-7d11e75d222c\" (UID: \"f243a481-94bf-48ca-94ab-7d11e75d222c\") " Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.036424 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-utilities" (OuterVolumeSpecName: "utilities") pod "f243a481-94bf-48ca-94ab-7d11e75d222c" (UID: "f243a481-94bf-48ca-94ab-7d11e75d222c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.041317 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f243a481-94bf-48ca-94ab-7d11e75d222c-kube-api-access-whkwg" (OuterVolumeSpecName: "kube-api-access-whkwg") pod "f243a481-94bf-48ca-94ab-7d11e75d222c" (UID: "f243a481-94bf-48ca-94ab-7d11e75d222c"). InnerVolumeSpecName "kube-api-access-whkwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.137402 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.137457 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whkwg\" (UniqueName: \"kubernetes.io/projected/f243a481-94bf-48ca-94ab-7d11e75d222c-kube-api-access-whkwg\") on node \"crc\" DevicePath \"\"" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.191399 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f243a481-94bf-48ca-94ab-7d11e75d222c" (UID: "f243a481-94bf-48ca-94ab-7d11e75d222c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.239569 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f243a481-94bf-48ca-94ab-7d11e75d222c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.495516 4903 generic.go:334] "Generic (PLEG): container finished" podID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerID="fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5" exitCode=0 Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.495587 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lpnf" event={"ID":"f243a481-94bf-48ca-94ab-7d11e75d222c","Type":"ContainerDied","Data":"fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5"} Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.495622 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lpnf" event={"ID":"f243a481-94bf-48ca-94ab-7d11e75d222c","Type":"ContainerDied","Data":"3a70cc2f0a7c4c1e3aaeec13f29187828ad06d408c2db7825acd64bcb55610f5"} Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.495642 4903 scope.go:117] "RemoveContainer" containerID="fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.495590 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5lpnf" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.514068 4903 scope.go:117] "RemoveContainer" containerID="6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.529668 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5lpnf"] Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.544627 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5lpnf"] Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.545214 4903 scope.go:117] "RemoveContainer" containerID="008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.563044 4903 scope.go:117] "RemoveContainer" containerID="fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5" Jan 28 16:41:19 crc kubenswrapper[4903]: E0128 16:41:19.563451 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5\": container with ID starting with fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5 not found: ID does not exist" containerID="fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.563497 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5"} err="failed to get container status \"fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5\": rpc error: code = NotFound desc = could not find container \"fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5\": container with ID starting with fa3e0708d5124116cf88bc8b42a66c344bdaedcbb8538b4c87d54ff285a8b3a5 not found: ID does not exist" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.563548 4903 scope.go:117] "RemoveContainer" containerID="6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6" Jan 28 16:41:19 crc kubenswrapper[4903]: E0128 16:41:19.563921 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6\": container with ID starting with 6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6 not found: ID does not exist" containerID="6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.563955 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6"} err="failed to get container status \"6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6\": rpc error: code = NotFound desc = could not find container \"6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6\": container with ID starting with 6867181d2aa0d915674025b95856ce9609d2d60a729d22ba805a295979da3ca6 not found: ID does not exist" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.563977 4903 scope.go:117] "RemoveContainer" containerID="008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320" Jan 28 16:41:19 crc kubenswrapper[4903]: E0128 16:41:19.564176 4903 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320\": container with ID starting with 008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320 not found: ID does not exist" containerID="008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320" Jan 28 16:41:19 crc kubenswrapper[4903]: I0128 16:41:19.564201 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320"} err="failed to get container status \"008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320\": rpc error: code = NotFound desc = could not find container \"008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320\": container with ID starting with 008db0c7bd56a048f0ae880991c23b71c741814dfa5a5fad6ef46029cc82f320 not found: ID does not exist" Jan 28 16:41:20 crc kubenswrapper[4903]: I0128 16:41:20.432065 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" path="/var/lib/kubelet/pods/f243a481-94bf-48ca-94ab-7d11e75d222c/volumes" Jan 28 16:41:26 crc kubenswrapper[4903]: I0128 16:41:26.613690 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:41:26 crc kubenswrapper[4903]: I0128 16:41:26.614639 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:41:56 crc kubenswrapper[4903]: I0128 16:41:56.613612 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:41:56 crc kubenswrapper[4903]: I0128 16:41:56.614269 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:42:26 crc kubenswrapper[4903]: I0128 16:42:26.614312 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:42:26 crc kubenswrapper[4903]: I0128 16:42:26.615413 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:42:26 crc kubenswrapper[4903]: I0128 16:42:26.615506 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:42:27 crc kubenswrapper[4903]: I0128 16:42:27.042516 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7188b4c8459c8850104b4379297ad3e09c2a0a015874d89687d600cc501edad"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:42:27 crc kubenswrapper[4903]: I0128 16:42:27.042703 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://b7188b4c8459c8850104b4379297ad3e09c2a0a015874d89687d600cc501edad" gracePeriod=600 Jan 28 16:42:28 crc kubenswrapper[4903]: I0128 16:42:28.057901 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="b7188b4c8459c8850104b4379297ad3e09c2a0a015874d89687d600cc501edad" exitCode=0 Jan 28 16:42:28 crc kubenswrapper[4903]: I0128 16:42:28.057980 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"b7188b4c8459c8850104b4379297ad3e09c2a0a015874d89687d600cc501edad"} Jan 28 16:42:28 crc kubenswrapper[4903]: I0128 16:42:28.058566 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454"} Jan 28 16:42:28 crc kubenswrapper[4903]: I0128 16:42:28.058603 4903 scope.go:117] "RemoveContainer" containerID="c2744719c2d95a379baa6d742cf99f51bf53798061fe5fffd109e431c303212b" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.715514 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fn26p"] Jan 28 16:43:41 crc kubenswrapper[4903]: E0128 16:43:41.716412 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="registry-server" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.716425 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="registry-server" Jan 28 16:43:41 crc kubenswrapper[4903]: E0128 16:43:41.716442 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="extract-content" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.716448 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="extract-content" Jan 28 16:43:41 crc kubenswrapper[4903]: E0128 16:43:41.716462 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="extract-utilities" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.716469 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="extract-utilities" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.716671 4903 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f243a481-94bf-48ca-94ab-7d11e75d222c" containerName="registry-server" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.717841 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.743198 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fn26p"] Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.863449 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-catalog-content\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.863514 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-utilities\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.863560 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5dkp\" (UniqueName: \"kubernetes.io/projected/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-kube-api-access-z5dkp\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.964384 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-catalog-content\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.964456 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-utilities\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.964485 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5dkp\" (UniqueName: \"kubernetes.io/projected/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-kube-api-access-z5dkp\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.965144 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-catalog-content\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.965265 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-utilities\") pod \"community-operators-fn26p\" (UID: 
\"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:41 crc kubenswrapper[4903]: I0128 16:43:41.986831 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5dkp\" (UniqueName: \"kubernetes.io/projected/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-kube-api-access-z5dkp\") pod \"community-operators-fn26p\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:42 crc kubenswrapper[4903]: I0128 16:43:42.036618 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:42 crc kubenswrapper[4903]: I0128 16:43:42.551851 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fn26p"] Jan 28 16:43:42 crc kubenswrapper[4903]: I0128 16:43:42.743468 4903 generic.go:334] "Generic (PLEG): container finished" podID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerID="f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532" exitCode=0 Jan 28 16:43:42 crc kubenswrapper[4903]: I0128 16:43:42.743504 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fn26p" event={"ID":"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31","Type":"ContainerDied","Data":"f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532"} Jan 28 16:43:42 crc kubenswrapper[4903]: I0128 16:43:42.743545 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fn26p" event={"ID":"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31","Type":"ContainerStarted","Data":"1f04bfd414d0cb4714ac5704fed946fb0eb08eec2408d2fbfe1924f419ce1a92"} Jan 28 16:43:42 crc kubenswrapper[4903]: I0128 16:43:42.747312 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:43:43 crc kubenswrapper[4903]: I0128 16:43:43.755379 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fn26p" event={"ID":"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31","Type":"ContainerStarted","Data":"56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa"} Jan 28 16:43:44 crc kubenswrapper[4903]: I0128 16:43:44.768778 4903 generic.go:334] "Generic (PLEG): container finished" podID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerID="56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa" exitCode=0 Jan 28 16:43:44 crc kubenswrapper[4903]: I0128 16:43:44.768913 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fn26p" event={"ID":"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31","Type":"ContainerDied","Data":"56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa"} Jan 28 16:43:45 crc kubenswrapper[4903]: I0128 16:43:45.779427 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fn26p" event={"ID":"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31","Type":"ContainerStarted","Data":"bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11"} Jan 28 16:43:45 crc kubenswrapper[4903]: I0128 16:43:45.800413 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fn26p" podStartSLOduration=2.3038659790000002 podStartE2EDuration="4.800394204s" podCreationTimestamp="2026-01-28 16:43:41 +0000 UTC" firstStartedPulling="2026-01-28 16:43:42.747099739 +0000 UTC m=+3495.023071250" 
lastFinishedPulling="2026-01-28 16:43:45.243627964 +0000 UTC m=+3497.519599475" observedRunningTime="2026-01-28 16:43:45.794222655 +0000 UTC m=+3498.070194186" watchObservedRunningTime="2026-01-28 16:43:45.800394204 +0000 UTC m=+3498.076365715" Jan 28 16:43:52 crc kubenswrapper[4903]: I0128 16:43:52.037228 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:52 crc kubenswrapper[4903]: I0128 16:43:52.037771 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:52 crc kubenswrapper[4903]: I0128 16:43:52.090753 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:52 crc kubenswrapper[4903]: I0128 16:43:52.904980 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:53 crc kubenswrapper[4903]: I0128 16:43:53.104728 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fn26p"] Jan 28 16:43:54 crc kubenswrapper[4903]: I0128 16:43:54.859705 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fn26p" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="registry-server" containerID="cri-o://bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11" gracePeriod=2 Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.261970 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.382886 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5dkp\" (UniqueName: \"kubernetes.io/projected/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-kube-api-access-z5dkp\") pod \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.382930 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-utilities\") pod \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.383044 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-catalog-content\") pod \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\" (UID: \"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31\") " Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.384390 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-utilities" (OuterVolumeSpecName: "utilities") pod "3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" (UID: "3e6625f8-16ef-413c-96b4-fdbbbe3f4c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.389277 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-kube-api-access-z5dkp" (OuterVolumeSpecName: "kube-api-access-z5dkp") pod "3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" (UID: "3e6625f8-16ef-413c-96b4-fdbbbe3f4c31"). InnerVolumeSpecName "kube-api-access-z5dkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.440482 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" (UID: "3e6625f8-16ef-413c-96b4-fdbbbe3f4c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.484407 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.484439 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5dkp\" (UniqueName: \"kubernetes.io/projected/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-kube-api-access-z5dkp\") on node \"crc\" DevicePath \"\"" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.484450 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.874884 4903 generic.go:334] "Generic (PLEG): container finished" podID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerID="bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11" exitCode=0 Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.874969 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fn26p" event={"ID":"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31","Type":"ContainerDied","Data":"bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11"} Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.875025 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fn26p" event={"ID":"3e6625f8-16ef-413c-96b4-fdbbbe3f4c31","Type":"ContainerDied","Data":"1f04bfd414d0cb4714ac5704fed946fb0eb08eec2408d2fbfe1924f419ce1a92"} Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.875026 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fn26p" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.875085 4903 scope.go:117] "RemoveContainer" containerID="bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.902745 4903 scope.go:117] "RemoveContainer" containerID="56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.929168 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fn26p"] Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.937723 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fn26p"] Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.942357 4903 scope.go:117] "RemoveContainer" containerID="f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.965782 4903 scope.go:117] "RemoveContainer" containerID="bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11" Jan 28 16:43:55 crc kubenswrapper[4903]: E0128 16:43:55.966304 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11\": container with ID starting with bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11 not found: ID does not exist" containerID="bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.966346 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11"} err="failed to get container status \"bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11\": rpc error: code = NotFound desc = could not find container \"bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11\": container with ID starting with bca3e32f04941235677485b3e65701b40a7095e095c03408dfc166da469eac11 not found: ID does not exist" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.966377 4903 scope.go:117] "RemoveContainer" containerID="56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa" Jan 28 16:43:55 crc kubenswrapper[4903]: E0128 16:43:55.966804 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa\": container with ID starting with 56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa not found: ID does not exist" containerID="56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.966847 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa"} err="failed to get container status \"56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa\": rpc error: code = NotFound desc = could not find container \"56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa\": container with ID starting with 56513fc7a5991f67d202bc5d9d93df74c71ce0f3d34424ca43eae33aa6d705aa not found: ID does not exist" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.966871 4903 scope.go:117] "RemoveContainer" 
containerID="f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532" Jan 28 16:43:55 crc kubenswrapper[4903]: E0128 16:43:55.967239 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532\": container with ID starting with f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532 not found: ID does not exist" containerID="f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532" Jan 28 16:43:55 crc kubenswrapper[4903]: I0128 16:43:55.967303 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532"} err="failed to get container status \"f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532\": rpc error: code = NotFound desc = could not find container \"f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532\": container with ID starting with f3245cbd09c2e4bdd4b422f448f75c0d42bad6b03c149b06d3f11a7100936532 not found: ID does not exist" Jan 28 16:43:56 crc kubenswrapper[4903]: I0128 16:43:56.431708 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" path="/var/lib/kubelet/pods/3e6625f8-16ef-413c-96b4-fdbbbe3f4c31/volumes" Jan 28 16:44:56 crc kubenswrapper[4903]: I0128 16:44:56.613786 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:44:56 crc kubenswrapper[4903]: I0128 16:44:56.615358 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.163151 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh"] Jan 28 16:45:00 crc kubenswrapper[4903]: E0128 16:45:00.164204 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="extract-utilities" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.164223 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="extract-utilities" Jan 28 16:45:00 crc kubenswrapper[4903]: E0128 16:45:00.164241 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="extract-content" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.164250 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="extract-content" Jan 28 16:45:00 crc kubenswrapper[4903]: E0128 16:45:00.164271 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="registry-server" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.164279 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="registry-server" Jan 28 16:45:00 crc 
kubenswrapper[4903]: I0128 16:45:00.164434 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6625f8-16ef-413c-96b4-fdbbbe3f4c31" containerName="registry-server" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.165418 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.168750 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.168954 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.178983 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh"] Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.334471 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4k9r\" (UniqueName: \"kubernetes.io/projected/df0e68f5-3463-42a7-8887-c6735d6cb2dc-kube-api-access-t4k9r\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.334831 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0e68f5-3463-42a7-8887-c6735d6cb2dc-config-volume\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.335026 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0e68f5-3463-42a7-8887-c6735d6cb2dc-secret-volume\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.436143 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0e68f5-3463-42a7-8887-c6735d6cb2dc-config-volume\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.436203 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0e68f5-3463-42a7-8887-c6735d6cb2dc-secret-volume\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.436237 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4k9r\" (UniqueName: \"kubernetes.io/projected/df0e68f5-3463-42a7-8887-c6735d6cb2dc-kube-api-access-t4k9r\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.437899 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0e68f5-3463-42a7-8887-c6735d6cb2dc-config-volume\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.465461 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0e68f5-3463-42a7-8887-c6735d6cb2dc-secret-volume\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.471450 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4k9r\" (UniqueName: \"kubernetes.io/projected/df0e68f5-3463-42a7-8887-c6735d6cb2dc-kube-api-access-t4k9r\") pod \"collect-profiles-29493645-qntzh\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.488885 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:00 crc kubenswrapper[4903]: I0128 16:45:00.734663 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh"] Jan 28 16:45:01 crc kubenswrapper[4903]: I0128 16:45:01.486626 4903 generic.go:334] "Generic (PLEG): container finished" podID="df0e68f5-3463-42a7-8887-c6735d6cb2dc" containerID="9f7af22b77c0548d184633858fe755b01b8f7467a86139bb4fc765cfdfc488a6" exitCode=0 Jan 28 16:45:01 crc kubenswrapper[4903]: I0128 16:45:01.486679 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" event={"ID":"df0e68f5-3463-42a7-8887-c6735d6cb2dc","Type":"ContainerDied","Data":"9f7af22b77c0548d184633858fe755b01b8f7467a86139bb4fc765cfdfc488a6"} Jan 28 16:45:01 crc kubenswrapper[4903]: I0128 16:45:01.486712 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" event={"ID":"df0e68f5-3463-42a7-8887-c6735d6cb2dc","Type":"ContainerStarted","Data":"fa1e5632eb54fc9523c8dd9ad3ebdea8c3f8808700d95c8ef9e24d5f03341d5b"} Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.755171 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.772656 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0e68f5-3463-42a7-8887-c6735d6cb2dc-config-volume\") pod \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.772771 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4k9r\" (UniqueName: \"kubernetes.io/projected/df0e68f5-3463-42a7-8887-c6735d6cb2dc-kube-api-access-t4k9r\") pod \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.772825 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0e68f5-3463-42a7-8887-c6735d6cb2dc-secret-volume\") pod \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\" (UID: \"df0e68f5-3463-42a7-8887-c6735d6cb2dc\") " Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.774094 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0e68f5-3463-42a7-8887-c6735d6cb2dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "df0e68f5-3463-42a7-8887-c6735d6cb2dc" (UID: "df0e68f5-3463-42a7-8887-c6735d6cb2dc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.774733 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0e68f5-3463-42a7-8887-c6735d6cb2dc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.780308 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0e68f5-3463-42a7-8887-c6735d6cb2dc-kube-api-access-t4k9r" (OuterVolumeSpecName: "kube-api-access-t4k9r") pod "df0e68f5-3463-42a7-8887-c6735d6cb2dc" (UID: "df0e68f5-3463-42a7-8887-c6735d6cb2dc"). InnerVolumeSpecName "kube-api-access-t4k9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.786698 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0e68f5-3463-42a7-8887-c6735d6cb2dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df0e68f5-3463-42a7-8887-c6735d6cb2dc" (UID: "df0e68f5-3463-42a7-8887-c6735d6cb2dc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.876740 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df0e68f5-3463-42a7-8887-c6735d6cb2dc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:45:02 crc kubenswrapper[4903]: I0128 16:45:02.876778 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4k9r\" (UniqueName: \"kubernetes.io/projected/df0e68f5-3463-42a7-8887-c6735d6cb2dc-kube-api-access-t4k9r\") on node \"crc\" DevicePath \"\"" Jan 28 16:45:03 crc kubenswrapper[4903]: I0128 16:45:03.506205 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" event={"ID":"df0e68f5-3463-42a7-8887-c6735d6cb2dc","Type":"ContainerDied","Data":"fa1e5632eb54fc9523c8dd9ad3ebdea8c3f8808700d95c8ef9e24d5f03341d5b"} Jan 28 16:45:03 crc kubenswrapper[4903]: I0128 16:45:03.506537 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa1e5632eb54fc9523c8dd9ad3ebdea8c3f8808700d95c8ef9e24d5f03341d5b" Jan 28 16:45:03 crc kubenswrapper[4903]: I0128 16:45:03.506280 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh" Jan 28 16:45:03 crc kubenswrapper[4903]: I0128 16:45:03.870461 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258"] Jan 28 16:45:03 crc kubenswrapper[4903]: I0128 16:45:03.878664 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-fk258"] Jan 28 16:45:04 crc kubenswrapper[4903]: I0128 16:45:04.426508 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5866143c-b9f1-4789-b270-00769269e4a1" path="/var/lib/kubelet/pods/5866143c-b9f1-4789-b270-00769269e4a1/volumes" Jan 28 16:45:26 crc kubenswrapper[4903]: I0128 16:45:26.614318 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:45:26 crc kubenswrapper[4903]: I0128 16:45:26.615203 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:45:52 crc kubenswrapper[4903]: I0128 16:45:52.741898 4903 scope.go:117] "RemoveContainer" containerID="ea52012ea53daa00e69457f556a103ef3f4f23481d34c6acdc4f501970dd5ba6" Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.614134 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.614611 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.614679 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.615665 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.615815 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" gracePeriod=600 Jan 28 16:45:56 crc kubenswrapper[4903]: E0128 16:45:56.744186 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.982495 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" exitCode=0 Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.982576 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454"} Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.982658 4903 scope.go:117] "RemoveContainer" containerID="b7188b4c8459c8850104b4379297ad3e09c2a0a015874d89687d600cc501edad" Jan 28 16:45:56 crc kubenswrapper[4903]: I0128 16:45:56.983374 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:45:56 crc kubenswrapper[4903]: E0128 16:45:56.983664 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:46:12 crc kubenswrapper[4903]: I0128 16:46:12.413663 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:46:12 crc kubenswrapper[4903]: E0128 16:46:12.414885 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:46:24 crc kubenswrapper[4903]: I0128 16:46:24.413668 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:46:24 crc kubenswrapper[4903]: E0128 16:46:24.414472 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:46:37 crc kubenswrapper[4903]: I0128 16:46:37.414838 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:46:37 crc kubenswrapper[4903]: E0128 16:46:37.416308 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:46:52 crc kubenswrapper[4903]: I0128 16:46:52.413881 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:46:52 crc kubenswrapper[4903]: E0128 16:46:52.414921 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:47:03 crc kubenswrapper[4903]: I0128 16:47:03.414394 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:47:03 crc kubenswrapper[4903]: E0128 16:47:03.415722 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.394831 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-87dpn"] Jan 28 16:47:05 crc kubenswrapper[4903]: E0128 16:47:05.395836 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0e68f5-3463-42a7-8887-c6735d6cb2dc" containerName="collect-profiles" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.395857 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0e68f5-3463-42a7-8887-c6735d6cb2dc" containerName="collect-profiles" Jan 28 16:47:05 crc kubenswrapper[4903]: 
I0128 16:47:05.396076 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0e68f5-3463-42a7-8887-c6735d6cb2dc" containerName="collect-profiles" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.397370 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.414024 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-87dpn"] Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.492352 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqpxc\" (UniqueName: \"kubernetes.io/projected/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-kube-api-access-zqpxc\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.492430 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-utilities\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.492702 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-catalog-content\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.595026 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqpxc\" (UniqueName: \"kubernetes.io/projected/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-kube-api-access-zqpxc\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.595117 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-utilities\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.595200 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-catalog-content\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.595895 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-utilities\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.595973 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-catalog-content\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.619629 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqpxc\" (UniqueName: \"kubernetes.io/projected/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-kube-api-access-zqpxc\") pod \"redhat-marketplace-87dpn\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:05 crc kubenswrapper[4903]: I0128 16:47:05.777089 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:06 crc kubenswrapper[4903]: I0128 16:47:06.236604 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-87dpn"] Jan 28 16:47:06 crc kubenswrapper[4903]: I0128 16:47:06.535476 4903 generic.go:334] "Generic (PLEG): container finished" podID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerID="1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060" exitCode=0 Jan 28 16:47:06 crc kubenswrapper[4903]: I0128 16:47:06.535545 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87dpn" event={"ID":"541c85bc-305f-4a9e-9d2b-54014aa5e0f3","Type":"ContainerDied","Data":"1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060"} Jan 28 16:47:06 crc kubenswrapper[4903]: I0128 16:47:06.535579 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87dpn" event={"ID":"541c85bc-305f-4a9e-9d2b-54014aa5e0f3","Type":"ContainerStarted","Data":"0f1ab0923a7741d16c6f9ce25c376eb1ffa8f0399702cb5676ee69e4e3a80b32"} Jan 28 16:47:07 crc kubenswrapper[4903]: I0128 16:47:07.545392 4903 generic.go:334] "Generic (PLEG): container finished" podID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerID="e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be" exitCode=0 Jan 28 16:47:07 crc kubenswrapper[4903]: I0128 16:47:07.545487 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87dpn" event={"ID":"541c85bc-305f-4a9e-9d2b-54014aa5e0f3","Type":"ContainerDied","Data":"e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be"} Jan 28 16:47:08 crc kubenswrapper[4903]: I0128 16:47:08.559597 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87dpn" event={"ID":"541c85bc-305f-4a9e-9d2b-54014aa5e0f3","Type":"ContainerStarted","Data":"73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf"} Jan 28 16:47:08 crc kubenswrapper[4903]: I0128 16:47:08.583179 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-87dpn" podStartSLOduration=2.167805634 podStartE2EDuration="3.58315746s" podCreationTimestamp="2026-01-28 16:47:05 +0000 UTC" firstStartedPulling="2026-01-28 16:47:06.539020303 +0000 UTC m=+3698.814991824" lastFinishedPulling="2026-01-28 16:47:07.954372139 +0000 UTC m=+3700.230343650" observedRunningTime="2026-01-28 16:47:08.576169921 +0000 UTC m=+3700.852141452" watchObservedRunningTime="2026-01-28 16:47:08.58315746 +0000 UTC m=+3700.859128971" Jan 28 16:47:15 crc kubenswrapper[4903]: I0128 16:47:15.777820 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:15 crc kubenswrapper[4903]: I0128 16:47:15.778357 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:15 crc kubenswrapper[4903]: I0128 16:47:15.845510 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:16 crc kubenswrapper[4903]: I0128 16:47:16.413370 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:47:16 crc kubenswrapper[4903]: E0128 16:47:16.413605 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:47:16 crc kubenswrapper[4903]: I0128 16:47:16.659900 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:16 crc kubenswrapper[4903]: I0128 16:47:16.711815 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-87dpn"] Jan 28 16:47:18 crc kubenswrapper[4903]: I0128 16:47:18.642289 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-87dpn" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="registry-server" containerID="cri-o://73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf" gracePeriod=2 Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.595310 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.654090 4903 generic.go:334] "Generic (PLEG): container finished" podID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerID="73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf" exitCode=0 Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.654158 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87dpn" event={"ID":"541c85bc-305f-4a9e-9d2b-54014aa5e0f3","Type":"ContainerDied","Data":"73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf"} Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.654260 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-87dpn" event={"ID":"541c85bc-305f-4a9e-9d2b-54014aa5e0f3","Type":"ContainerDied","Data":"0f1ab0923a7741d16c6f9ce25c376eb1ffa8f0399702cb5676ee69e4e3a80b32"} Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.654302 4903 scope.go:117] "RemoveContainer" containerID="73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.654305 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-87dpn" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.679308 4903 scope.go:117] "RemoveContainer" containerID="e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.706181 4903 scope.go:117] "RemoveContainer" containerID="1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.724156 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-utilities\") pod \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.725731 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-catalog-content\") pod \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.725797 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqpxc\" (UniqueName: \"kubernetes.io/projected/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-kube-api-access-zqpxc\") pod \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\" (UID: \"541c85bc-305f-4a9e-9d2b-54014aa5e0f3\") " Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.726062 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-utilities" (OuterVolumeSpecName: "utilities") pod "541c85bc-305f-4a9e-9d2b-54014aa5e0f3" (UID: "541c85bc-305f-4a9e-9d2b-54014aa5e0f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.726334 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.735390 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-kube-api-access-zqpxc" (OuterVolumeSpecName: "kube-api-access-zqpxc") pod "541c85bc-305f-4a9e-9d2b-54014aa5e0f3" (UID: "541c85bc-305f-4a9e-9d2b-54014aa5e0f3"). InnerVolumeSpecName "kube-api-access-zqpxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.738637 4903 scope.go:117] "RemoveContainer" containerID="73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf" Jan 28 16:47:19 crc kubenswrapper[4903]: E0128 16:47:19.739844 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf\": container with ID starting with 73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf not found: ID does not exist" containerID="73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.739919 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf"} err="failed to get container status \"73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf\": rpc error: code = NotFound desc = could not find container \"73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf\": container with ID starting with 73a2e1cddbb7e80452ce4f7a0833ac14908e63f3997da256ed2a3cb571e963bf not found: ID does not exist" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.739949 4903 scope.go:117] "RemoveContainer" containerID="e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be" Jan 28 16:47:19 crc kubenswrapper[4903]: E0128 16:47:19.741225 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be\": container with ID starting with e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be not found: ID does not exist" containerID="e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.741269 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be"} err="failed to get container status \"e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be\": rpc error: code = NotFound desc = could not find container \"e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be\": container with ID starting with e4465888f51271cb68077942f8d88f5e487283be66dfaf26ebd525437046f9be not found: ID does not exist" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.741298 4903 scope.go:117] "RemoveContainer" containerID="1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060" Jan 28 16:47:19 crc kubenswrapper[4903]: E0128 16:47:19.744185 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060\": container with ID starting with 1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060 not found: ID does not exist" containerID="1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.744291 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060"} err="failed to get container status \"1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060\": rpc error: code = NotFound desc = could not 
find container \"1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060\": container with ID starting with 1da6d992da508734acbd26bb3d44df9d59a20d7fce1b006e658abe81c5356060 not found: ID does not exist" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.761308 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "541c85bc-305f-4a9e-9d2b-54014aa5e0f3" (UID: "541c85bc-305f-4a9e-9d2b-54014aa5e0f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.827379 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:47:19 crc kubenswrapper[4903]: I0128 16:47:19.827430 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqpxc\" (UniqueName: \"kubernetes.io/projected/541c85bc-305f-4a9e-9d2b-54014aa5e0f3-kube-api-access-zqpxc\") on node \"crc\" DevicePath \"\"" Jan 28 16:47:20 crc kubenswrapper[4903]: I0128 16:47:20.004267 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-87dpn"] Jan 28 16:47:20 crc kubenswrapper[4903]: I0128 16:47:20.027931 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-87dpn"] Jan 28 16:47:20 crc kubenswrapper[4903]: I0128 16:47:20.428202 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" path="/var/lib/kubelet/pods/541c85bc-305f-4a9e-9d2b-54014aa5e0f3/volumes" Jan 28 16:47:29 crc kubenswrapper[4903]: I0128 16:47:29.414303 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:47:29 crc kubenswrapper[4903]: E0128 16:47:29.415521 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:47:44 crc kubenswrapper[4903]: I0128 16:47:44.414924 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:47:44 crc kubenswrapper[4903]: E0128 16:47:44.416032 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:47:57 crc kubenswrapper[4903]: I0128 16:47:57.414203 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:47:57 crc kubenswrapper[4903]: E0128 16:47:57.415106 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:48:10 crc kubenswrapper[4903]: I0128 16:48:10.414856 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:48:10 crc kubenswrapper[4903]: E0128 16:48:10.415744 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:48:21 crc kubenswrapper[4903]: I0128 16:48:21.413495 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:48:21 crc kubenswrapper[4903]: E0128 16:48:21.414411 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:48:33 crc kubenswrapper[4903]: I0128 16:48:33.413439 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:48:33 crc kubenswrapper[4903]: E0128 16:48:33.414570 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:48:45 crc kubenswrapper[4903]: I0128 16:48:45.413989 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:48:45 crc kubenswrapper[4903]: E0128 16:48:45.414796 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:48:59 crc kubenswrapper[4903]: I0128 16:48:59.412966 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:48:59 crc kubenswrapper[4903]: E0128 16:48:59.413693 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:49:12 crc kubenswrapper[4903]: I0128 16:49:12.413163 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:49:12 crc kubenswrapper[4903]: E0128 16:49:12.413998 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:49:26 crc kubenswrapper[4903]: I0128 16:49:26.414744 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:49:26 crc kubenswrapper[4903]: E0128 16:49:26.415950 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:49:41 crc kubenswrapper[4903]: I0128 16:49:41.413768 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:49:41 crc kubenswrapper[4903]: E0128 16:49:41.414829 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:49:56 crc kubenswrapper[4903]: I0128 16:49:56.413616 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:49:56 crc kubenswrapper[4903]: E0128 16:49:56.414706 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:50:08 crc kubenswrapper[4903]: I0128 16:50:08.418615 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:50:08 crc kubenswrapper[4903]: E0128 16:50:08.419500 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:50:21 crc kubenswrapper[4903]: I0128 16:50:21.413179 4903 
scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:50:21 crc kubenswrapper[4903]: E0128 16:50:21.413898 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:50:34 crc kubenswrapper[4903]: I0128 16:50:34.413287 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:50:34 crc kubenswrapper[4903]: E0128 16:50:34.414133 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:50:49 crc kubenswrapper[4903]: I0128 16:50:49.413421 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:50:49 crc kubenswrapper[4903]: E0128 16:50:49.414308 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:51:03 crc kubenswrapper[4903]: I0128 16:51:03.413650 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:51:03 crc kubenswrapper[4903]: I0128 16:51:03.983074 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"d0989be99f715ef3ad88767480dbb1504c32fe96bce1dec318dce52bcef4c563"} Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.017248 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bzgg7"] Jan 28 16:51:04 crc kubenswrapper[4903]: E0128 16:51:04.017718 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="registry-server" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.017741 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="registry-server" Jan 28 16:51:04 crc kubenswrapper[4903]: E0128 16:51:04.017768 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="extract-utilities" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.017777 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="extract-utilities" Jan 28 16:51:04 crc kubenswrapper[4903]: E0128 16:51:04.017801 4903 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="extract-content" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.017810 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="extract-content" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.018009 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="541c85bc-305f-4a9e-9d2b-54014aa5e0f3" containerName="registry-server" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.019365 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.029960 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzgg7"] Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.089208 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-catalog-content\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.089295 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nxz4\" (UniqueName: \"kubernetes.io/projected/6744b37a-f7da-4a9b-9121-7367e4829705-kube-api-access-5nxz4\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.089380 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-utilities\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.191276 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-catalog-content\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.191620 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nxz4\" (UniqueName: \"kubernetes.io/projected/6744b37a-f7da-4a9b-9121-7367e4829705-kube-api-access-5nxz4\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.191690 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-utilities\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.191933 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-catalog-content\") pod 
\"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.192091 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-utilities\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.216283 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nxz4\" (UniqueName: \"kubernetes.io/projected/6744b37a-f7da-4a9b-9121-7367e4829705-kube-api-access-5nxz4\") pod \"certified-operators-bzgg7\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.353575 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.641755 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzgg7"] Jan 28 16:51:04 crc kubenswrapper[4903]: W0128 16:51:04.645223 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6744b37a_f7da_4a9b_9121_7367e4829705.slice/crio-9600326c6937323d0f6ba2dde4647b3c7b4fc529c1af8bb1dfd1b742a5f4e858 WatchSource:0}: Error finding container 9600326c6937323d0f6ba2dde4647b3c7b4fc529c1af8bb1dfd1b742a5f4e858: Status 404 returned error can't find the container with id 9600326c6937323d0f6ba2dde4647b3c7b4fc529c1af8bb1dfd1b742a5f4e858 Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.990005 4903 generic.go:334] "Generic (PLEG): container finished" podID="6744b37a-f7da-4a9b-9121-7367e4829705" containerID="bdf8ad8aa903e5a67f57c618583f9d3ec2b9050e56e5df3022a81d168703ab51" exitCode=0 Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.990063 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgg7" event={"ID":"6744b37a-f7da-4a9b-9121-7367e4829705","Type":"ContainerDied","Data":"bdf8ad8aa903e5a67f57c618583f9d3ec2b9050e56e5df3022a81d168703ab51"} Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.990335 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgg7" event={"ID":"6744b37a-f7da-4a9b-9121-7367e4829705","Type":"ContainerStarted","Data":"9600326c6937323d0f6ba2dde4647b3c7b4fc529c1af8bb1dfd1b742a5f4e858"} Jan 28 16:51:04 crc kubenswrapper[4903]: I0128 16:51:04.991609 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:51:06 crc kubenswrapper[4903]: I0128 16:51:06.001715 4903 generic.go:334] "Generic (PLEG): container finished" podID="6744b37a-f7da-4a9b-9121-7367e4829705" containerID="34b38b8a2d5b2033cd3db6f497c4bd2ed483bacf67070e7b59db3a56ec63d62d" exitCode=0 Jan 28 16:51:06 crc kubenswrapper[4903]: I0128 16:51:06.002259 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgg7" event={"ID":"6744b37a-f7da-4a9b-9121-7367e4829705","Type":"ContainerDied","Data":"34b38b8a2d5b2033cd3db6f497c4bd2ed483bacf67070e7b59db3a56ec63d62d"} Jan 28 16:51:07 crc kubenswrapper[4903]: I0128 16:51:07.011799 4903 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgg7" event={"ID":"6744b37a-f7da-4a9b-9121-7367e4829705","Type":"ContainerStarted","Data":"8ed7cc4a15753f327929fd320a375ef39a2882214efadd8dc81a4840794911ac"} Jan 28 16:51:07 crc kubenswrapper[4903]: I0128 16:51:07.035186 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bzgg7" podStartSLOduration=2.250108306 podStartE2EDuration="4.035169124s" podCreationTimestamp="2026-01-28 16:51:03 +0000 UTC" firstStartedPulling="2026-01-28 16:51:04.991389445 +0000 UTC m=+3937.267360956" lastFinishedPulling="2026-01-28 16:51:06.776450263 +0000 UTC m=+3939.052421774" observedRunningTime="2026-01-28 16:51:07.029746636 +0000 UTC m=+3939.305718147" watchObservedRunningTime="2026-01-28 16:51:07.035169124 +0000 UTC m=+3939.311140635" Jan 28 16:51:14 crc kubenswrapper[4903]: I0128 16:51:14.353945 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:14 crc kubenswrapper[4903]: I0128 16:51:14.354621 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:14 crc kubenswrapper[4903]: I0128 16:51:14.402174 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:15 crc kubenswrapper[4903]: I0128 16:51:15.123802 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:15 crc kubenswrapper[4903]: I0128 16:51:15.176895 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzgg7"] Jan 28 16:51:17 crc kubenswrapper[4903]: I0128 16:51:17.087893 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bzgg7" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="registry-server" containerID="cri-o://8ed7cc4a15753f327929fd320a375ef39a2882214efadd8dc81a4840794911ac" gracePeriod=2 Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.097471 4903 generic.go:334] "Generic (PLEG): container finished" podID="6744b37a-f7da-4a9b-9121-7367e4829705" containerID="8ed7cc4a15753f327929fd320a375ef39a2882214efadd8dc81a4840794911ac" exitCode=0 Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.097543 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgg7" event={"ID":"6744b37a-f7da-4a9b-9121-7367e4829705","Type":"ContainerDied","Data":"8ed7cc4a15753f327929fd320a375ef39a2882214efadd8dc81a4840794911ac"} Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.746590 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.812433 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nxz4\" (UniqueName: \"kubernetes.io/projected/6744b37a-f7da-4a9b-9121-7367e4829705-kube-api-access-5nxz4\") pod \"6744b37a-f7da-4a9b-9121-7367e4829705\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.813186 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-catalog-content\") pod \"6744b37a-f7da-4a9b-9121-7367e4829705\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.813255 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-utilities\") pod \"6744b37a-f7da-4a9b-9121-7367e4829705\" (UID: \"6744b37a-f7da-4a9b-9121-7367e4829705\") " Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.814806 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-utilities" (OuterVolumeSpecName: "utilities") pod "6744b37a-f7da-4a9b-9121-7367e4829705" (UID: "6744b37a-f7da-4a9b-9121-7367e4829705"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.830571 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6744b37a-f7da-4a9b-9121-7367e4829705-kube-api-access-5nxz4" (OuterVolumeSpecName: "kube-api-access-5nxz4") pod "6744b37a-f7da-4a9b-9121-7367e4829705" (UID: "6744b37a-f7da-4a9b-9121-7367e4829705"). InnerVolumeSpecName "kube-api-access-5nxz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.877588 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6744b37a-f7da-4a9b-9121-7367e4829705" (UID: "6744b37a-f7da-4a9b-9121-7367e4829705"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.915295 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.915341 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nxz4\" (UniqueName: \"kubernetes.io/projected/6744b37a-f7da-4a9b-9121-7367e4829705-kube-api-access-5nxz4\") on node \"crc\" DevicePath \"\"" Jan 28 16:51:18 crc kubenswrapper[4903]: I0128 16:51:18.915388 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6744b37a-f7da-4a9b-9121-7367e4829705-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:51:19 crc kubenswrapper[4903]: I0128 16:51:19.108059 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgg7" event={"ID":"6744b37a-f7da-4a9b-9121-7367e4829705","Type":"ContainerDied","Data":"9600326c6937323d0f6ba2dde4647b3c7b4fc529c1af8bb1dfd1b742a5f4e858"} Jan 28 16:51:19 crc kubenswrapper[4903]: I0128 16:51:19.108121 4903 scope.go:117] "RemoveContainer" containerID="8ed7cc4a15753f327929fd320a375ef39a2882214efadd8dc81a4840794911ac" Jan 28 16:51:19 crc kubenswrapper[4903]: I0128 16:51:19.108278 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzgg7" Jan 28 16:51:19 crc kubenswrapper[4903]: I0128 16:51:19.146382 4903 scope.go:117] "RemoveContainer" containerID="34b38b8a2d5b2033cd3db6f497c4bd2ed483bacf67070e7b59db3a56ec63d62d" Jan 28 16:51:19 crc kubenswrapper[4903]: I0128 16:51:19.153748 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzgg7"] Jan 28 16:51:19 crc kubenswrapper[4903]: I0128 16:51:19.159547 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bzgg7"] Jan 28 16:51:19 crc kubenswrapper[4903]: I0128 16:51:19.392514 4903 scope.go:117] "RemoveContainer" containerID="bdf8ad8aa903e5a67f57c618583f9d3ec2b9050e56e5df3022a81d168703ab51" Jan 28 16:51:20 crc kubenswrapper[4903]: I0128 16:51:20.423509 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" path="/var/lib/kubelet/pods/6744b37a-f7da-4a9b-9121-7367e4829705/volumes" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.299968 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jdgbt"] Jan 28 16:51:23 crc kubenswrapper[4903]: E0128 16:51:23.304284 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="registry-server" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.304322 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="registry-server" Jan 28 16:51:23 crc kubenswrapper[4903]: E0128 16:51:23.306612 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="extract-utilities" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.306657 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="extract-utilities" Jan 28 16:51:23 crc kubenswrapper[4903]: E0128 16:51:23.306695 4903 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="extract-content" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.306824 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="extract-content" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.317991 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6744b37a-f7da-4a9b-9121-7367e4829705" containerName="registry-server" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.324875 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdgbt"] Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.325077 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.379890 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-catalog-content\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.379953 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w29ps\" (UniqueName: \"kubernetes.io/projected/b60ae370-177e-473f-b9f5-86195f1d88b1-kube-api-access-w29ps\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.379977 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-utilities\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.481643 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-catalog-content\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.481713 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w29ps\" (UniqueName: \"kubernetes.io/projected/b60ae370-177e-473f-b9f5-86195f1d88b1-kube-api-access-w29ps\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.481737 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-utilities\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.482129 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-catalog-content\") pod 
\"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.482317 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-utilities\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.501261 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w29ps\" (UniqueName: \"kubernetes.io/projected/b60ae370-177e-473f-b9f5-86195f1d88b1-kube-api-access-w29ps\") pod \"redhat-operators-jdgbt\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:23 crc kubenswrapper[4903]: I0128 16:51:23.649715 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:24 crc kubenswrapper[4903]: I0128 16:51:24.163842 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jdgbt"] Jan 28 16:51:25 crc kubenswrapper[4903]: I0128 16:51:25.154745 4903 generic.go:334] "Generic (PLEG): container finished" podID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerID="ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d" exitCode=0 Jan 28 16:51:25 crc kubenswrapper[4903]: I0128 16:51:25.155157 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdgbt" event={"ID":"b60ae370-177e-473f-b9f5-86195f1d88b1","Type":"ContainerDied","Data":"ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d"} Jan 28 16:51:25 crc kubenswrapper[4903]: I0128 16:51:25.155203 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdgbt" event={"ID":"b60ae370-177e-473f-b9f5-86195f1d88b1","Type":"ContainerStarted","Data":"703d533dbd9a93eb1ad198c665b897a2ab48f801e69144917956401b182aa22a"} Jan 28 16:51:28 crc kubenswrapper[4903]: I0128 16:51:28.209058 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdgbt" event={"ID":"b60ae370-177e-473f-b9f5-86195f1d88b1","Type":"ContainerDied","Data":"e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b"} Jan 28 16:51:28 crc kubenswrapper[4903]: I0128 16:51:28.208972 4903 generic.go:334] "Generic (PLEG): container finished" podID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerID="e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b" exitCode=0 Jan 28 16:51:30 crc kubenswrapper[4903]: I0128 16:51:30.225697 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdgbt" event={"ID":"b60ae370-177e-473f-b9f5-86195f1d88b1","Type":"ContainerStarted","Data":"9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279"} Jan 28 16:51:30 crc kubenswrapper[4903]: I0128 16:51:30.253874 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jdgbt" podStartSLOduration=3.030603445 podStartE2EDuration="7.253859954s" podCreationTimestamp="2026-01-28 16:51:23 +0000 UTC" firstStartedPulling="2026-01-28 16:51:25.156912999 +0000 UTC m=+3957.432884510" lastFinishedPulling="2026-01-28 16:51:29.380169508 +0000 UTC m=+3961.656141019" observedRunningTime="2026-01-28 
16:51:30.250558184 +0000 UTC m=+3962.526529705" watchObservedRunningTime="2026-01-28 16:51:30.253859954 +0000 UTC m=+3962.529831465" Jan 28 16:51:33 crc kubenswrapper[4903]: I0128 16:51:33.650785 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:33 crc kubenswrapper[4903]: I0128 16:51:33.652164 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:34 crc kubenswrapper[4903]: I0128 16:51:34.698012 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jdgbt" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="registry-server" probeResult="failure" output=< Jan 28 16:51:34 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 16:51:34 crc kubenswrapper[4903]: > Jan 28 16:51:43 crc kubenswrapper[4903]: I0128 16:51:43.695627 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:43 crc kubenswrapper[4903]: I0128 16:51:43.747927 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:44 crc kubenswrapper[4903]: I0128 16:51:44.906356 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jdgbt"] Jan 28 16:51:45 crc kubenswrapper[4903]: I0128 16:51:45.338650 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jdgbt" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="registry-server" containerID="cri-o://9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279" gracePeriod=2 Jan 28 16:51:45 crc kubenswrapper[4903]: I0128 16:51:45.921790 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.034111 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w29ps\" (UniqueName: \"kubernetes.io/projected/b60ae370-177e-473f-b9f5-86195f1d88b1-kube-api-access-w29ps\") pod \"b60ae370-177e-473f-b9f5-86195f1d88b1\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.034410 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-utilities\") pod \"b60ae370-177e-473f-b9f5-86195f1d88b1\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.034682 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-catalog-content\") pod \"b60ae370-177e-473f-b9f5-86195f1d88b1\" (UID: \"b60ae370-177e-473f-b9f5-86195f1d88b1\") " Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.035426 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-utilities" (OuterVolumeSpecName: "utilities") pod "b60ae370-177e-473f-b9f5-86195f1d88b1" (UID: "b60ae370-177e-473f-b9f5-86195f1d88b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.041619 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60ae370-177e-473f-b9f5-86195f1d88b1-kube-api-access-w29ps" (OuterVolumeSpecName: "kube-api-access-w29ps") pod "b60ae370-177e-473f-b9f5-86195f1d88b1" (UID: "b60ae370-177e-473f-b9f5-86195f1d88b1"). InnerVolumeSpecName "kube-api-access-w29ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.136227 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w29ps\" (UniqueName: \"kubernetes.io/projected/b60ae370-177e-473f-b9f5-86195f1d88b1-kube-api-access-w29ps\") on node \"crc\" DevicePath \"\"" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.136272 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.150615 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b60ae370-177e-473f-b9f5-86195f1d88b1" (UID: "b60ae370-177e-473f-b9f5-86195f1d88b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.238279 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b60ae370-177e-473f-b9f5-86195f1d88b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.349676 4903 generic.go:334] "Generic (PLEG): container finished" podID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerID="9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279" exitCode=0 Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.349731 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jdgbt" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.349733 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdgbt" event={"ID":"b60ae370-177e-473f-b9f5-86195f1d88b1","Type":"ContainerDied","Data":"9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279"} Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.349862 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jdgbt" event={"ID":"b60ae370-177e-473f-b9f5-86195f1d88b1","Type":"ContainerDied","Data":"703d533dbd9a93eb1ad198c665b897a2ab48f801e69144917956401b182aa22a"} Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.349885 4903 scope.go:117] "RemoveContainer" containerID="9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.378889 4903 scope.go:117] "RemoveContainer" containerID="e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.390682 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jdgbt"] Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.396339 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jdgbt"] Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.413319 4903 scope.go:117] "RemoveContainer" containerID="ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.427703 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" path="/var/lib/kubelet/pods/b60ae370-177e-473f-b9f5-86195f1d88b1/volumes" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.431388 4903 scope.go:117] "RemoveContainer" containerID="9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279" Jan 28 16:51:46 crc kubenswrapper[4903]: E0128 16:51:46.431856 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279\": container with ID starting with 9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279 not found: ID does not exist" containerID="9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.431897 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279"} err="failed to get container status \"9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279\": rpc error: code = NotFound desc = could not find container \"9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279\": container with ID starting with 9edf4cc0d83922eabbd1cd063ace85ed611d1f3dca4cece8c554c3cbaca97279 not found: ID does not exist" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.431921 4903 scope.go:117] "RemoveContainer" containerID="e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b" Jan 28 16:51:46 crc kubenswrapper[4903]: E0128 16:51:46.433744 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b\": container with ID starting with 
e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b not found: ID does not exist" containerID="e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.453260 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b"} err="failed to get container status \"e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b\": rpc error: code = NotFound desc = could not find container \"e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b\": container with ID starting with e20f1f14132b6d66b64c3b103c27fa7c6551e4b782bc109905d30f2864a65d8b not found: ID does not exist" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.453345 4903 scope.go:117] "RemoveContainer" containerID="ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d" Jan 28 16:51:46 crc kubenswrapper[4903]: E0128 16:51:46.454439 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d\": container with ID starting with ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d not found: ID does not exist" containerID="ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d" Jan 28 16:51:46 crc kubenswrapper[4903]: I0128 16:51:46.454608 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d"} err="failed to get container status \"ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d\": rpc error: code = NotFound desc = could not find container \"ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d\": container with ID starting with ab8d5968941e1cf796d3345bf59766acd163a3e60099ff8ed0e70b059c5fe45d not found: ID does not exist" Jan 28 16:53:26 crc kubenswrapper[4903]: I0128 16:53:26.614283 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:53:26 crc kubenswrapper[4903]: I0128 16:53:26.614873 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:53:56 crc kubenswrapper[4903]: I0128 16:53:56.613733 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:53:56 crc kubenswrapper[4903]: I0128 16:53:56.614554 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:54:26 crc kubenswrapper[4903]: I0128 
16:54:26.613499 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:54:26 crc kubenswrapper[4903]: I0128 16:54:26.614133 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:54:26 crc kubenswrapper[4903]: I0128 16:54:26.614196 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:54:26 crc kubenswrapper[4903]: I0128 16:54:26.614946 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0989be99f715ef3ad88767480dbb1504c32fe96bce1dec318dce52bcef4c563"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:54:26 crc kubenswrapper[4903]: I0128 16:54:26.615036 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://d0989be99f715ef3ad88767480dbb1504c32fe96bce1dec318dce52bcef4c563" gracePeriod=600 Jan 28 16:54:27 crc kubenswrapper[4903]: I0128 16:54:27.548011 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="d0989be99f715ef3ad88767480dbb1504c32fe96bce1dec318dce52bcef4c563" exitCode=0 Jan 28 16:54:27 crc kubenswrapper[4903]: I0128 16:54:27.548104 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"d0989be99f715ef3ad88767480dbb1504c32fe96bce1dec318dce52bcef4c563"} Jan 28 16:54:27 crc kubenswrapper[4903]: I0128 16:54:27.548300 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7"} Jan 28 16:54:27 crc kubenswrapper[4903]: I0128 16:54:27.548321 4903 scope.go:117] "RemoveContainer" containerID="96d01c1b665615c7953f655c4d1bd102cb2f99aedd9f7c8b113956174841c454" Jan 28 16:56:26 crc kubenswrapper[4903]: I0128 16:56:26.613365 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:56:26 crc kubenswrapper[4903]: I0128 16:56:26.613942 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 28 16:56:56 crc kubenswrapper[4903]: I0128 16:56:56.613801 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:56:56 crc kubenswrapper[4903]: I0128 16:56:56.614441 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.219865 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-584v9"] Jan 28 16:57:21 crc kubenswrapper[4903]: E0128 16:57:21.220774 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="extract-utilities" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.220788 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="extract-utilities" Jan 28 16:57:21 crc kubenswrapper[4903]: E0128 16:57:21.220811 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="extract-content" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.220817 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="extract-content" Jan 28 16:57:21 crc kubenswrapper[4903]: E0128 16:57:21.220832 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="registry-server" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.220838 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="registry-server" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.220963 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60ae370-177e-473f-b9f5-86195f1d88b1" containerName="registry-server" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.221882 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.237081 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-584v9"] Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.332411 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgmc\" (UniqueName: \"kubernetes.io/projected/e9564d1a-8001-483d-9bde-f7373c044639-kube-api-access-4jgmc\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.332532 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-utilities\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.332674 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-catalog-content\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.434132 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jgmc\" (UniqueName: \"kubernetes.io/projected/e9564d1a-8001-483d-9bde-f7373c044639-kube-api-access-4jgmc\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.434211 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-utilities\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.434236 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-catalog-content\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.434788 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-utilities\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.434811 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-catalog-content\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.454402 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4jgmc\" (UniqueName: \"kubernetes.io/projected/e9564d1a-8001-483d-9bde-f7373c044639-kube-api-access-4jgmc\") pod \"redhat-marketplace-584v9\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.538844 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:21 crc kubenswrapper[4903]: I0128 16:57:21.988787 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-584v9"] Jan 28 16:57:22 crc kubenswrapper[4903]: I0128 16:57:22.811638 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584v9" event={"ID":"e9564d1a-8001-483d-9bde-f7373c044639","Type":"ContainerStarted","Data":"f5157940b43320bf40503789ab5f689d63ca681dfa910e49646c9b2f87030bc6"} Jan 28 16:57:23 crc kubenswrapper[4903]: I0128 16:57:23.820529 4903 generic.go:334] "Generic (PLEG): container finished" podID="e9564d1a-8001-483d-9bde-f7373c044639" containerID="a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f" exitCode=0 Jan 28 16:57:23 crc kubenswrapper[4903]: I0128 16:57:23.820661 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584v9" event={"ID":"e9564d1a-8001-483d-9bde-f7373c044639","Type":"ContainerDied","Data":"a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f"} Jan 28 16:57:23 crc kubenswrapper[4903]: I0128 16:57:23.823092 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.614182 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.614742 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.614786 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.615206 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.615255 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" gracePeriod=600 Jan 28 16:57:26 crc kubenswrapper[4903]: E0128 16:57:26.753993 
4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.847068 4903 generic.go:334] "Generic (PLEG): container finished" podID="e9564d1a-8001-483d-9bde-f7373c044639" containerID="cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb" exitCode=0 Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.847148 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584v9" event={"ID":"e9564d1a-8001-483d-9bde-f7373c044639","Type":"ContainerDied","Data":"cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb"} Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.850052 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" exitCode=0 Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.850092 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7"} Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.850175 4903 scope.go:117] "RemoveContainer" containerID="d0989be99f715ef3ad88767480dbb1504c32fe96bce1dec318dce52bcef4c563" Jan 28 16:57:26 crc kubenswrapper[4903]: I0128 16:57:26.851768 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:57:26 crc kubenswrapper[4903]: E0128 16:57:26.852153 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:57:28 crc kubenswrapper[4903]: I0128 16:57:28.868306 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584v9" event={"ID":"e9564d1a-8001-483d-9bde-f7373c044639","Type":"ContainerStarted","Data":"2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b"} Jan 28 16:57:28 crc kubenswrapper[4903]: I0128 16:57:28.891885 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-584v9" podStartSLOduration=3.970845912 podStartE2EDuration="7.891858856s" podCreationTimestamp="2026-01-28 16:57:21 +0000 UTC" firstStartedPulling="2026-01-28 16:57:23.822574713 +0000 UTC m=+4316.098546224" lastFinishedPulling="2026-01-28 16:57:27.743587637 +0000 UTC m=+4320.019559168" observedRunningTime="2026-01-28 16:57:28.885826772 +0000 UTC m=+4321.161798283" watchObservedRunningTime="2026-01-28 16:57:28.891858856 +0000 UTC m=+4321.167830367" Jan 28 16:57:31 crc kubenswrapper[4903]: I0128 16:57:31.540079 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:31 crc kubenswrapper[4903]: I0128 16:57:31.540356 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:31 crc kubenswrapper[4903]: I0128 16:57:31.581033 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:41 crc kubenswrapper[4903]: I0128 16:57:41.413892 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:57:41 crc kubenswrapper[4903]: E0128 16:57:41.414612 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:57:41 crc kubenswrapper[4903]: I0128 16:57:41.581819 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:41 crc kubenswrapper[4903]: I0128 16:57:41.633643 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-584v9"] Jan 28 16:57:41 crc kubenswrapper[4903]: I0128 16:57:41.948352 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-584v9" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="registry-server" containerID="cri-o://2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b" gracePeriod=2 Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.535237 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.658522 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-utilities\") pod \"e9564d1a-8001-483d-9bde-f7373c044639\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.658754 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jgmc\" (UniqueName: \"kubernetes.io/projected/e9564d1a-8001-483d-9bde-f7373c044639-kube-api-access-4jgmc\") pod \"e9564d1a-8001-483d-9bde-f7373c044639\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.658815 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-catalog-content\") pod \"e9564d1a-8001-483d-9bde-f7373c044639\" (UID: \"e9564d1a-8001-483d-9bde-f7373c044639\") " Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.659897 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-utilities" (OuterVolumeSpecName: "utilities") pod "e9564d1a-8001-483d-9bde-f7373c044639" (UID: "e9564d1a-8001-483d-9bde-f7373c044639"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.670849 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9564d1a-8001-483d-9bde-f7373c044639-kube-api-access-4jgmc" (OuterVolumeSpecName: "kube-api-access-4jgmc") pod "e9564d1a-8001-483d-9bde-f7373c044639" (UID: "e9564d1a-8001-483d-9bde-f7373c044639"). InnerVolumeSpecName "kube-api-access-4jgmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.683353 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9564d1a-8001-483d-9bde-f7373c044639" (UID: "e9564d1a-8001-483d-9bde-f7373c044639"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.760825 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jgmc\" (UniqueName: \"kubernetes.io/projected/e9564d1a-8001-483d-9bde-f7373c044639-kube-api-access-4jgmc\") on node \"crc\" DevicePath \"\"" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.760872 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.760886 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9564d1a-8001-483d-9bde-f7373c044639-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.957666 4903 generic.go:334] "Generic (PLEG): container finished" podID="e9564d1a-8001-483d-9bde-f7373c044639" containerID="2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b" exitCode=0 Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.957709 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584v9" event={"ID":"e9564d1a-8001-483d-9bde-f7373c044639","Type":"ContainerDied","Data":"2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b"} Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.957763 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-584v9" event={"ID":"e9564d1a-8001-483d-9bde-f7373c044639","Type":"ContainerDied","Data":"f5157940b43320bf40503789ab5f689d63ca681dfa910e49646c9b2f87030bc6"} Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.957782 4903 scope.go:117] "RemoveContainer" containerID="2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.957788 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-584v9" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.979340 4903 scope.go:117] "RemoveContainer" containerID="cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb" Jan 28 16:57:42 crc kubenswrapper[4903]: I0128 16:57:42.994094 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-584v9"] Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.003433 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-584v9"] Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.006993 4903 scope.go:117] "RemoveContainer" containerID="a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f" Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.025274 4903 scope.go:117] "RemoveContainer" containerID="2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b" Jan 28 16:57:43 crc kubenswrapper[4903]: E0128 16:57:43.025792 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b\": container with ID starting with 2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b not found: ID does not exist" containerID="2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b" Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.025836 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b"} err="failed to get container status \"2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b\": rpc error: code = NotFound desc = could not find container \"2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b\": container with ID starting with 2edc28960c20a223f2ca3146f61b98af85bb1fad2bc19c475e7619d35cf45f4b not found: ID does not exist" Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.025863 4903 scope.go:117] "RemoveContainer" containerID="cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb" Jan 28 16:57:43 crc kubenswrapper[4903]: E0128 16:57:43.026100 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb\": container with ID starting with cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb not found: ID does not exist" containerID="cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb" Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.026125 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb"} err="failed to get container status \"cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb\": rpc error: code = NotFound desc = could not find container \"cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb\": container with ID starting with cd6b9f51ca764d8978ac6a135de4790056c95733949420099ccf8fd3353edabb not found: ID does not exist" Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.026145 4903 scope.go:117] "RemoveContainer" containerID="a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f" Jan 28 16:57:43 crc kubenswrapper[4903]: E0128 16:57:43.026606 4903 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f\": container with ID starting with a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f not found: ID does not exist" containerID="a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f" Jan 28 16:57:43 crc kubenswrapper[4903]: I0128 16:57:43.026629 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f"} err="failed to get container status \"a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f\": rpc error: code = NotFound desc = could not find container \"a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f\": container with ID starting with a53ef2b9ee1574f235410fe3c8cc2404f48cc79af1e2a025563c8980ebf3cd3f not found: ID does not exist" Jan 28 16:57:44 crc kubenswrapper[4903]: I0128 16:57:44.421991 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9564d1a-8001-483d-9bde-f7373c044639" path="/var/lib/kubelet/pods/e9564d1a-8001-483d-9bde-f7373c044639/volumes" Jan 28 16:57:54 crc kubenswrapper[4903]: I0128 16:57:54.413515 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:57:54 crc kubenswrapper[4903]: E0128 16:57:54.414339 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:58:07 crc kubenswrapper[4903]: I0128 16:58:07.412913 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:58:07 crc kubenswrapper[4903]: E0128 16:58:07.413573 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:58:22 crc kubenswrapper[4903]: I0128 16:58:22.414157 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:58:22 crc kubenswrapper[4903]: E0128 16:58:22.414941 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:58:33 crc kubenswrapper[4903]: I0128 16:58:33.413927 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:58:33 crc kubenswrapper[4903]: E0128 16:58:33.414760 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:58:46 crc kubenswrapper[4903]: I0128 16:58:46.414346 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:58:46 crc kubenswrapper[4903]: E0128 16:58:46.415653 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:58:58 crc kubenswrapper[4903]: I0128 16:58:58.418110 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:58:58 crc kubenswrapper[4903]: E0128 16:58:58.418945 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:59:11 crc kubenswrapper[4903]: I0128 16:59:11.414132 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:59:11 crc kubenswrapper[4903]: E0128 16:59:11.414800 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:59:25 crc kubenswrapper[4903]: I0128 16:59:25.412962 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:59:25 crc kubenswrapper[4903]: E0128 16:59:25.415175 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:59:36 crc kubenswrapper[4903]: I0128 16:59:36.413328 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:59:36 crc kubenswrapper[4903]: E0128 16:59:36.414174 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 16:59:51 crc kubenswrapper[4903]: I0128 16:59:51.413040 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 16:59:51 crc kubenswrapper[4903]: E0128 16:59:51.414028 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.193377 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg"] Jan 28 17:00:00 crc kubenswrapper[4903]: E0128 17:00:00.194459 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="extract-content" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.194480 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="extract-content" Jan 28 17:00:00 crc kubenswrapper[4903]: E0128 17:00:00.194501 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="extract-utilities" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.194511 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="extract-utilities" Jan 28 17:00:00 crc kubenswrapper[4903]: E0128 17:00:00.194649 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="registry-server" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.194661 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="registry-server" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.194842 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9564d1a-8001-483d-9bde-f7373c044639" containerName="registry-server" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.195503 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.197989 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.198567 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.210776 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg"] Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.260841 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/900499d1-401f-47f7-8646-e86b1edcaece-secret-volume\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.260933 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmq7p\" (UniqueName: \"kubernetes.io/projected/900499d1-401f-47f7-8646-e86b1edcaece-kube-api-access-tmq7p\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.260987 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/900499d1-401f-47f7-8646-e86b1edcaece-config-volume\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.362262 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/900499d1-401f-47f7-8646-e86b1edcaece-secret-volume\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.362434 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmq7p\" (UniqueName: \"kubernetes.io/projected/900499d1-401f-47f7-8646-e86b1edcaece-kube-api-access-tmq7p\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.362506 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/900499d1-401f-47f7-8646-e86b1edcaece-config-volume\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.363410 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/900499d1-401f-47f7-8646-e86b1edcaece-config-volume\") pod 
\"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.371405 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/900499d1-401f-47f7-8646-e86b1edcaece-secret-volume\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.379349 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmq7p\" (UniqueName: \"kubernetes.io/projected/900499d1-401f-47f7-8646-e86b1edcaece-kube-api-access-tmq7p\") pod \"collect-profiles-29493660-g87gg\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.514909 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.950617 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg"] Jan 28 17:00:00 crc kubenswrapper[4903]: I0128 17:00:00.989344 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" event={"ID":"900499d1-401f-47f7-8646-e86b1edcaece","Type":"ContainerStarted","Data":"e520f80cc71b4123f238fee7c9c9f157c2fc2aa3a31fc97143bc22024078bd8f"} Jan 28 17:00:02 crc kubenswrapper[4903]: I0128 17:00:02.002055 4903 generic.go:334] "Generic (PLEG): container finished" podID="900499d1-401f-47f7-8646-e86b1edcaece" containerID="b8788feddf94c8f2d1c2d6fbdd25bb373ddf45c088b0502ee07b79b4152f37ec" exitCode=0 Jan 28 17:00:02 crc kubenswrapper[4903]: I0128 17:00:02.002171 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" event={"ID":"900499d1-401f-47f7-8646-e86b1edcaece","Type":"ContainerDied","Data":"b8788feddf94c8f2d1c2d6fbdd25bb373ddf45c088b0502ee07b79b4152f37ec"} Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.291542 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.417905 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/900499d1-401f-47f7-8646-e86b1edcaece-secret-volume\") pod \"900499d1-401f-47f7-8646-e86b1edcaece\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.417964 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmq7p\" (UniqueName: \"kubernetes.io/projected/900499d1-401f-47f7-8646-e86b1edcaece-kube-api-access-tmq7p\") pod \"900499d1-401f-47f7-8646-e86b1edcaece\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.417992 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/900499d1-401f-47f7-8646-e86b1edcaece-config-volume\") pod \"900499d1-401f-47f7-8646-e86b1edcaece\" (UID: \"900499d1-401f-47f7-8646-e86b1edcaece\") " Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.419083 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900499d1-401f-47f7-8646-e86b1edcaece-config-volume" (OuterVolumeSpecName: "config-volume") pod "900499d1-401f-47f7-8646-e86b1edcaece" (UID: "900499d1-401f-47f7-8646-e86b1edcaece"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.423304 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900499d1-401f-47f7-8646-e86b1edcaece-kube-api-access-tmq7p" (OuterVolumeSpecName: "kube-api-access-tmq7p") pod "900499d1-401f-47f7-8646-e86b1edcaece" (UID: "900499d1-401f-47f7-8646-e86b1edcaece"). InnerVolumeSpecName "kube-api-access-tmq7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.423472 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900499d1-401f-47f7-8646-e86b1edcaece-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "900499d1-401f-47f7-8646-e86b1edcaece" (UID: "900499d1-401f-47f7-8646-e86b1edcaece"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.520052 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/900499d1-401f-47f7-8646-e86b1edcaece-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.520360 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmq7p\" (UniqueName: \"kubernetes.io/projected/900499d1-401f-47f7-8646-e86b1edcaece-kube-api-access-tmq7p\") on node \"crc\" DevicePath \"\"" Jan 28 17:00:03 crc kubenswrapper[4903]: I0128 17:00:03.520467 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/900499d1-401f-47f7-8646-e86b1edcaece-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:00:04 crc kubenswrapper[4903]: I0128 17:00:04.018118 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" event={"ID":"900499d1-401f-47f7-8646-e86b1edcaece","Type":"ContainerDied","Data":"e520f80cc71b4123f238fee7c9c9f157c2fc2aa3a31fc97143bc22024078bd8f"} Jan 28 17:00:04 crc kubenswrapper[4903]: I0128 17:00:04.018370 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e520f80cc71b4123f238fee7c9c9f157c2fc2aa3a31fc97143bc22024078bd8f" Jan 28 17:00:04 crc kubenswrapper[4903]: I0128 17:00:04.018153 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg" Jan 28 17:00:04 crc kubenswrapper[4903]: I0128 17:00:04.362726 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c"] Jan 28 17:00:04 crc kubenswrapper[4903]: I0128 17:00:04.373090 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-g9f6c"] Jan 28 17:00:04 crc kubenswrapper[4903]: I0128 17:00:04.414284 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:00:04 crc kubenswrapper[4903]: E0128 17:00:04.414721 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:00:04 crc kubenswrapper[4903]: I0128 17:00:04.422061 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2421664-8bc4-4ab4-b292-2d0ed0db5585" path="/var/lib/kubelet/pods/a2421664-8bc4-4ab4-b292-2d0ed0db5585/volumes" Jan 28 17:00:15 crc kubenswrapper[4903]: I0128 17:00:15.413693 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:00:15 crc kubenswrapper[4903]: E0128 17:00:15.414940 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:00:26 crc kubenswrapper[4903]: I0128 17:00:26.413267 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:00:26 crc kubenswrapper[4903]: E0128 17:00:26.414155 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:00:37 crc kubenswrapper[4903]: I0128 17:00:37.413673 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:00:37 crc kubenswrapper[4903]: E0128 17:00:37.414619 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:00:49 crc kubenswrapper[4903]: I0128 17:00:49.414151 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:00:49 crc kubenswrapper[4903]: E0128 17:00:49.415068 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:00:53 crc kubenswrapper[4903]: I0128 17:00:53.071707 4903 scope.go:117] "RemoveContainer" containerID="5b79179d68af474d805b94af19d67f4050788b014b902e90b5a208811690bd59" Jan 28 17:01:02 crc kubenswrapper[4903]: I0128 17:01:02.414119 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:01:02 crc kubenswrapper[4903]: E0128 17:01:02.415085 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.307314 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jr9dz"] Jan 28 17:01:12 crc kubenswrapper[4903]: E0128 17:01:12.308193 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900499d1-401f-47f7-8646-e86b1edcaece" containerName="collect-profiles" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.308206 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="900499d1-401f-47f7-8646-e86b1edcaece" containerName="collect-profiles" Jan 
28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.308336 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="900499d1-401f-47f7-8646-e86b1edcaece" containerName="collect-profiles" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.309263 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.311735 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-catalog-content\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.311787 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgx8\" (UniqueName: \"kubernetes.io/projected/8408c8c4-bf82-4391-a7e3-437bd50906cf-kube-api-access-dsgx8\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.311899 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-utilities\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.319689 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jr9dz"] Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.413101 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-catalog-content\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.413165 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgx8\" (UniqueName: \"kubernetes.io/projected/8408c8c4-bf82-4391-a7e3-437bd50906cf-kube-api-access-dsgx8\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.413226 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-utilities\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.413729 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-catalog-content\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.414048 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-utilities\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.440424 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgx8\" (UniqueName: \"kubernetes.io/projected/8408c8c4-bf82-4391-a7e3-437bd50906cf-kube-api-access-dsgx8\") pod \"certified-operators-jr9dz\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.632481 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:12 crc kubenswrapper[4903]: I0128 17:01:12.934871 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jr9dz"] Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.413739 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:01:13 crc kubenswrapper[4903]: E0128 17:01:13.414259 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.503975 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr9dz" event={"ID":"8408c8c4-bf82-4391-a7e3-437bd50906cf","Type":"ContainerDied","Data":"e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45"} Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.503905 4903 generic.go:334] "Generic (PLEG): container finished" podID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerID="e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45" exitCode=0 Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.504098 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr9dz" event={"ID":"8408c8c4-bf82-4391-a7e3-437bd50906cf","Type":"ContainerStarted","Data":"7ab0295b24c236f162ea2e7664e14e0d03572d40b8461c1c793df7940d6751f6"} Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.510634 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5h2p"] Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.513512 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.528237 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5h2p"] Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.628271 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-catalog-content\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.628342 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-utilities\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.628649 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62d5\" (UniqueName: \"kubernetes.io/projected/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-kube-api-access-m62d5\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.729925 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-catalog-content\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.729999 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-utilities\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.730050 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62d5\" (UniqueName: \"kubernetes.io/projected/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-kube-api-access-m62d5\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.730556 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-catalog-content\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.730690 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-utilities\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.752648 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m62d5\" (UniqueName: \"kubernetes.io/projected/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-kube-api-access-m62d5\") pod \"community-operators-v5h2p\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:13 crc kubenswrapper[4903]: I0128 17:01:13.831994 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:14 crc kubenswrapper[4903]: I0128 17:01:14.352935 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5h2p"] Jan 28 17:01:14 crc kubenswrapper[4903]: I0128 17:01:14.513455 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5h2p" event={"ID":"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b","Type":"ContainerStarted","Data":"3b246ee9eca81692ac728d6d0ef32991f81c75563ee4a61aa645d5c15491e767"} Jan 28 17:01:15 crc kubenswrapper[4903]: I0128 17:01:15.521905 4903 generic.go:334] "Generic (PLEG): container finished" podID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerID="4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70" exitCode=0 Jan 28 17:01:15 crc kubenswrapper[4903]: I0128 17:01:15.521991 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5h2p" event={"ID":"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b","Type":"ContainerDied","Data":"4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70"} Jan 28 17:01:15 crc kubenswrapper[4903]: I0128 17:01:15.524393 4903 generic.go:334] "Generic (PLEG): container finished" podID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerID="7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9" exitCode=0 Jan 28 17:01:15 crc kubenswrapper[4903]: I0128 17:01:15.524432 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr9dz" event={"ID":"8408c8c4-bf82-4391-a7e3-437bd50906cf","Type":"ContainerDied","Data":"7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9"} Jan 28 17:01:16 crc kubenswrapper[4903]: I0128 17:01:16.535610 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr9dz" event={"ID":"8408c8c4-bf82-4391-a7e3-437bd50906cf","Type":"ContainerStarted","Data":"d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4"} Jan 28 17:01:16 crc kubenswrapper[4903]: I0128 17:01:16.566616 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jr9dz" podStartSLOduration=2.071443557 podStartE2EDuration="4.566587268s" podCreationTimestamp="2026-01-28 17:01:12 +0000 UTC" firstStartedPulling="2026-01-28 17:01:13.505553448 +0000 UTC m=+4545.781524959" lastFinishedPulling="2026-01-28 17:01:16.000697169 +0000 UTC m=+4548.276668670" observedRunningTime="2026-01-28 17:01:16.562739204 +0000 UTC m=+4548.838710715" watchObservedRunningTime="2026-01-28 17:01:16.566587268 +0000 UTC m=+4548.842558779" Jan 28 17:01:20 crc kubenswrapper[4903]: I0128 17:01:20.566981 4903 generic.go:334] "Generic (PLEG): container finished" podID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerID="48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8" exitCode=0 Jan 28 17:01:20 crc kubenswrapper[4903]: I0128 17:01:20.567089 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5h2p" 
event={"ID":"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b","Type":"ContainerDied","Data":"48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8"} Jan 28 17:01:22 crc kubenswrapper[4903]: I0128 17:01:22.583820 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5h2p" event={"ID":"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b","Type":"ContainerStarted","Data":"566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777"} Jan 28 17:01:22 crc kubenswrapper[4903]: I0128 17:01:22.605444 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5h2p" podStartSLOduration=3.399297909 podStartE2EDuration="9.605422901s" podCreationTimestamp="2026-01-28 17:01:13 +0000 UTC" firstStartedPulling="2026-01-28 17:01:15.523583507 +0000 UTC m=+4547.799555018" lastFinishedPulling="2026-01-28 17:01:21.729708499 +0000 UTC m=+4554.005680010" observedRunningTime="2026-01-28 17:01:22.602000318 +0000 UTC m=+4554.877971839" watchObservedRunningTime="2026-01-28 17:01:22.605422901 +0000 UTC m=+4554.881394412" Jan 28 17:01:22 crc kubenswrapper[4903]: I0128 17:01:22.633117 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:22 crc kubenswrapper[4903]: I0128 17:01:22.633438 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:22 crc kubenswrapper[4903]: I0128 17:01:22.676306 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:23 crc kubenswrapper[4903]: I0128 17:01:23.654130 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:23 crc kubenswrapper[4903]: I0128 17:01:23.832508 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:23 crc kubenswrapper[4903]: I0128 17:01:23.832577 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:23 crc kubenswrapper[4903]: I0128 17:01:23.873711 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:24 crc kubenswrapper[4903]: I0128 17:01:24.093761 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jr9dz"] Jan 28 17:01:25 crc kubenswrapper[4903]: I0128 17:01:25.607519 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jr9dz" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="registry-server" containerID="cri-o://d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4" gracePeriod=2 Jan 28 17:01:26 crc kubenswrapper[4903]: I0128 17:01:26.414001 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:01:26 crc kubenswrapper[4903]: E0128 17:01:26.414783 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.418174 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.534727 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgx8\" (UniqueName: \"kubernetes.io/projected/8408c8c4-bf82-4391-a7e3-437bd50906cf-kube-api-access-dsgx8\") pod \"8408c8c4-bf82-4391-a7e3-437bd50906cf\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.535998 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-utilities\") pod \"8408c8c4-bf82-4391-a7e3-437bd50906cf\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.536037 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-catalog-content\") pod \"8408c8c4-bf82-4391-a7e3-437bd50906cf\" (UID: \"8408c8c4-bf82-4391-a7e3-437bd50906cf\") " Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.537881 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-utilities" (OuterVolumeSpecName: "utilities") pod "8408c8c4-bf82-4391-a7e3-437bd50906cf" (UID: "8408c8c4-bf82-4391-a7e3-437bd50906cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.542166 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8408c8c4-bf82-4391-a7e3-437bd50906cf-kube-api-access-dsgx8" (OuterVolumeSpecName: "kube-api-access-dsgx8") pod "8408c8c4-bf82-4391-a7e3-437bd50906cf" (UID: "8408c8c4-bf82-4391-a7e3-437bd50906cf"). InnerVolumeSpecName "kube-api-access-dsgx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.588762 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8408c8c4-bf82-4391-a7e3-437bd50906cf" (UID: "8408c8c4-bf82-4391-a7e3-437bd50906cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.628334 4903 generic.go:334] "Generic (PLEG): container finished" podID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerID="d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4" exitCode=0 Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.628399 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr9dz" event={"ID":"8408c8c4-bf82-4391-a7e3-437bd50906cf","Type":"ContainerDied","Data":"d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4"} Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.628432 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jr9dz" event={"ID":"8408c8c4-bf82-4391-a7e3-437bd50906cf","Type":"ContainerDied","Data":"7ab0295b24c236f162ea2e7664e14e0d03572d40b8461c1c793df7940d6751f6"} Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.628447 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jr9dz" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.628457 4903 scope.go:117] "RemoveContainer" containerID="d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.637805 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.637848 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8408c8c4-bf82-4391-a7e3-437bd50906cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.637862 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgx8\" (UniqueName: \"kubernetes.io/projected/8408c8c4-bf82-4391-a7e3-437bd50906cf-kube-api-access-dsgx8\") on node \"crc\" DevicePath \"\"" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.659894 4903 scope.go:117] "RemoveContainer" containerID="7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.666987 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jr9dz"] Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.673696 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jr9dz"] Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.682597 4903 scope.go:117] "RemoveContainer" containerID="e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.713147 4903 scope.go:117] "RemoveContainer" containerID="d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4" Jan 28 17:01:27 crc kubenswrapper[4903]: E0128 17:01:27.713614 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4\": container with ID starting with d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4 not found: ID does not exist" containerID="d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.713664 
4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4"} err="failed to get container status \"d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4\": rpc error: code = NotFound desc = could not find container \"d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4\": container with ID starting with d7e78006cd64ea5399e98ef14d599cc0e3675d172d667cdaadb0e5f70a53bbe4 not found: ID does not exist" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.713695 4903 scope.go:117] "RemoveContainer" containerID="7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9" Jan 28 17:01:27 crc kubenswrapper[4903]: E0128 17:01:27.714194 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9\": container with ID starting with 7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9 not found: ID does not exist" containerID="7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.714223 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9"} err="failed to get container status \"7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9\": rpc error: code = NotFound desc = could not find container \"7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9\": container with ID starting with 7376883da0ed7136e7039b99ee195bec9c15c1f7cdee4f44d26a293b69d2ffe9 not found: ID does not exist" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.714238 4903 scope.go:117] "RemoveContainer" containerID="e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45" Jan 28 17:01:27 crc kubenswrapper[4903]: E0128 17:01:27.714636 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45\": container with ID starting with e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45 not found: ID does not exist" containerID="e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45" Jan 28 17:01:27 crc kubenswrapper[4903]: I0128 17:01:27.714689 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45"} err="failed to get container status \"e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45\": rpc error: code = NotFound desc = could not find container \"e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45\": container with ID starting with e305cbcb31254a112da4bec54e1058cdd886e06051ad744c8a0a92e18c0f6b45 not found: ID does not exist" Jan 28 17:01:28 crc kubenswrapper[4903]: I0128 17:01:28.424516 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" path="/var/lib/kubelet/pods/8408c8c4-bf82-4391-a7e3-437bd50906cf/volumes" Jan 28 17:01:33 crc kubenswrapper[4903]: I0128 17:01:33.871257 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:01:33 crc kubenswrapper[4903]: I0128 17:01:33.941431 4903 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-v5h2p"] Jan 28 17:01:33 crc kubenswrapper[4903]: I0128 17:01:33.974042 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2slh"] Jan 28 17:01:33 crc kubenswrapper[4903]: I0128 17:01:33.974298 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2slh" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="registry-server" containerID="cri-o://e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6" gracePeriod=2 Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.457592 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2slh" Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.629634 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-catalog-content\") pod \"205dcee3-f878-45d6-8b6d-9050cc045101\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.629792 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-utilities\") pod \"205dcee3-f878-45d6-8b6d-9050cc045101\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.629863 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf22k\" (UniqueName: \"kubernetes.io/projected/205dcee3-f878-45d6-8b6d-9050cc045101-kube-api-access-zf22k\") pod \"205dcee3-f878-45d6-8b6d-9050cc045101\" (UID: \"205dcee3-f878-45d6-8b6d-9050cc045101\") " Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.630332 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-utilities" (OuterVolumeSpecName: "utilities") pod "205dcee3-f878-45d6-8b6d-9050cc045101" (UID: "205dcee3-f878-45d6-8b6d-9050cc045101"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.637108 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205dcee3-f878-45d6-8b6d-9050cc045101-kube-api-access-zf22k" (OuterVolumeSpecName: "kube-api-access-zf22k") pod "205dcee3-f878-45d6-8b6d-9050cc045101" (UID: "205dcee3-f878-45d6-8b6d-9050cc045101"). InnerVolumeSpecName "kube-api-access-zf22k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.676219 4903 generic.go:334] "Generic (PLEG): container finished" podID="205dcee3-f878-45d6-8b6d-9050cc045101" containerID="e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6" exitCode=0 Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.676330 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2slh" event={"ID":"205dcee3-f878-45d6-8b6d-9050cc045101","Type":"ContainerDied","Data":"e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6"} Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.676407 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2slh" event={"ID":"205dcee3-f878-45d6-8b6d-9050cc045101","Type":"ContainerDied","Data":"1398427f8ac79367c616919ae6f786824277252d72d15c6a214cc96399d270af"} Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.676430 4903 scope.go:117] "RemoveContainer" containerID="e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6" Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.676729 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2slh" Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.732114 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf22k\" (UniqueName: \"kubernetes.io/projected/205dcee3-f878-45d6-8b6d-9050cc045101-kube-api-access-zf22k\") on node \"crc\" DevicePath \"\"" Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.732151 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:01:34 crc kubenswrapper[4903]: I0128 17:01:34.986640 4903 scope.go:117] "RemoveContainer" containerID="c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.008390 4903 scope.go:117] "RemoveContainer" containerID="e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.027232 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "205dcee3-f878-45d6-8b6d-9050cc045101" (UID: "205dcee3-f878-45d6-8b6d-9050cc045101"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.037269 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205dcee3-f878-45d6-8b6d-9050cc045101-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.049698 4903 scope.go:117] "RemoveContainer" containerID="e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6" Jan 28 17:01:35 crc kubenswrapper[4903]: E0128 17:01:35.050323 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6\": container with ID starting with e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6 not found: ID does not exist" containerID="e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.050381 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6"} err="failed to get container status \"e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6\": rpc error: code = NotFound desc = could not find container \"e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6\": container with ID starting with e45aac796799a006bbde7ce685a96560d68b1edd945985f12a612dab969e7bf6 not found: ID does not exist" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.050413 4903 scope.go:117] "RemoveContainer" containerID="c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618" Jan 28 17:01:35 crc kubenswrapper[4903]: E0128 17:01:35.051858 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618\": container with ID starting with c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618 not found: ID does not exist" containerID="c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.051882 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618"} err="failed to get container status \"c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618\": rpc error: code = NotFound desc = could not find container \"c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618\": container with ID starting with c3fcc30a313fdcf3807350e416b11bfcbe78f95a485bd3b59cf89e0ebdcbc618 not found: ID does not exist" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.051895 4903 scope.go:117] "RemoveContainer" containerID="e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc" Jan 28 17:01:35 crc kubenswrapper[4903]: E0128 17:01:35.052318 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc\": container with ID starting with e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc not found: ID does not exist" containerID="e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.052380 4903 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc"} err="failed to get container status \"e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc\": rpc error: code = NotFound desc = could not find container \"e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc\": container with ID starting with e64c40de0950a24839357439aa3ec4821abf66fc83578769faa0c65a3fa281dc not found: ID does not exist" Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.307741 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2slh"] Jan 28 17:01:35 crc kubenswrapper[4903]: I0128 17:01:35.312473 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2slh"] Jan 28 17:01:36 crc kubenswrapper[4903]: I0128 17:01:36.424643 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" path="/var/lib/kubelet/pods/205dcee3-f878-45d6-8b6d-9050cc045101/volumes" Jan 28 17:01:39 crc kubenswrapper[4903]: I0128 17:01:39.412872 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:01:39 crc kubenswrapper[4903]: E0128 17:01:39.413347 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:01:52 crc kubenswrapper[4903]: I0128 17:01:52.413696 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:01:52 crc kubenswrapper[4903]: E0128 17:01:52.414374 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:02:05 crc kubenswrapper[4903]: I0128 17:02:05.413787 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:02:05 crc kubenswrapper[4903]: E0128 17:02:05.415613 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:02:18 crc kubenswrapper[4903]: I0128 17:02:18.421902 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:02:18 crc kubenswrapper[4903]: E0128 17:02:18.422846 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:02:32 crc kubenswrapper[4903]: I0128 17:02:32.413677 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:02:34 crc kubenswrapper[4903]: I0128 17:02:34.243496 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"87886688b115e3bd33272efeb9c1a2fd3a83c01034c6fff2375b5803ae4625f1"} Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.253857 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s8p7h"] Jan 28 17:02:46 crc kubenswrapper[4903]: E0128 17:02:46.256968 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="extract-utilities" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.257112 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="extract-utilities" Jan 28 17:02:46 crc kubenswrapper[4903]: E0128 17:02:46.257185 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="extract-utilities" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.257266 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="extract-utilities" Jan 28 17:02:46 crc kubenswrapper[4903]: E0128 17:02:46.257337 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="extract-content" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.257397 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="extract-content" Jan 28 17:02:46 crc kubenswrapper[4903]: E0128 17:02:46.257475 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="registry-server" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.257550 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="registry-server" Jan 28 17:02:46 crc kubenswrapper[4903]: E0128 17:02:46.257617 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="registry-server" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.257687 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="registry-server" Jan 28 17:02:46 crc kubenswrapper[4903]: E0128 17:02:46.257750 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="extract-content" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.257803 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="extract-content" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.258030 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="205dcee3-f878-45d6-8b6d-9050cc045101" containerName="registry-server" Jan 28 17:02:46 crc 
kubenswrapper[4903]: I0128 17:02:46.258108 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8408c8c4-bf82-4391-a7e3-437bd50906cf" containerName="registry-server" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.259188 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.268073 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8p7h"] Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.312110 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-catalog-content\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.312447 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvwq\" (UniqueName: \"kubernetes.io/projected/f304cb46-3659-4a9f-a0a4-75b844de6f65-kube-api-access-vgvwq\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.312583 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-utilities\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.413493 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvwq\" (UniqueName: \"kubernetes.io/projected/f304cb46-3659-4a9f-a0a4-75b844de6f65-kube-api-access-vgvwq\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.413668 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-utilities\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.413747 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-catalog-content\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.414247 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-catalog-content\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.414309 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-utilities\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.434489 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvwq\" (UniqueName: \"kubernetes.io/projected/f304cb46-3659-4a9f-a0a4-75b844de6f65-kube-api-access-vgvwq\") pod \"redhat-operators-s8p7h\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:46 crc kubenswrapper[4903]: I0128 17:02:46.583147 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:47 crc kubenswrapper[4903]: I0128 17:02:47.035036 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s8p7h"] Jan 28 17:02:47 crc kubenswrapper[4903]: I0128 17:02:47.327952 4903 generic.go:334] "Generic (PLEG): container finished" podID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerID="7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620" exitCode=0 Jan 28 17:02:47 crc kubenswrapper[4903]: I0128 17:02:47.328050 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8p7h" event={"ID":"f304cb46-3659-4a9f-a0a4-75b844de6f65","Type":"ContainerDied","Data":"7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620"} Jan 28 17:02:47 crc kubenswrapper[4903]: I0128 17:02:47.328349 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8p7h" event={"ID":"f304cb46-3659-4a9f-a0a4-75b844de6f65","Type":"ContainerStarted","Data":"4b48538fd858c9ac9e808e3c275bc2bf5d15c14e2f937cfa491e49d085cdca9e"} Jan 28 17:02:47 crc kubenswrapper[4903]: I0128 17:02:47.331344 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:02:49 crc kubenswrapper[4903]: I0128 17:02:49.346864 4903 generic.go:334] "Generic (PLEG): container finished" podID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerID="7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d" exitCode=0 Jan 28 17:02:49 crc kubenswrapper[4903]: I0128 17:02:49.347248 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8p7h" event={"ID":"f304cb46-3659-4a9f-a0a4-75b844de6f65","Type":"ContainerDied","Data":"7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d"} Jan 28 17:02:50 crc kubenswrapper[4903]: I0128 17:02:50.356995 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8p7h" event={"ID":"f304cb46-3659-4a9f-a0a4-75b844de6f65","Type":"ContainerStarted","Data":"70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a"} Jan 28 17:02:50 crc kubenswrapper[4903]: I0128 17:02:50.376510 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s8p7h" podStartSLOduration=1.825448408 podStartE2EDuration="4.376489662s" podCreationTimestamp="2026-01-28 17:02:46 +0000 UTC" firstStartedPulling="2026-01-28 17:02:47.331073326 +0000 UTC m=+4639.607044837" lastFinishedPulling="2026-01-28 17:02:49.88211458 +0000 UTC m=+4642.158086091" observedRunningTime="2026-01-28 17:02:50.375515666 +0000 UTC m=+4642.651487177" watchObservedRunningTime="2026-01-28 17:02:50.376489662 +0000 UTC m=+4642.652461183" Jan 28 
17:02:56 crc kubenswrapper[4903]: I0128 17:02:56.583420 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:56 crc kubenswrapper[4903]: I0128 17:02:56.584096 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:56 crc kubenswrapper[4903]: I0128 17:02:56.638501 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:57 crc kubenswrapper[4903]: I0128 17:02:57.442857 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:02:57 crc kubenswrapper[4903]: I0128 17:02:57.496256 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8p7h"] Jan 28 17:02:59 crc kubenswrapper[4903]: I0128 17:02:59.414127 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s8p7h" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="registry-server" containerID="cri-o://70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a" gracePeriod=2 Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.335855 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.407420 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-utilities\") pod \"f304cb46-3659-4a9f-a0a4-75b844de6f65\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.407624 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-catalog-content\") pod \"f304cb46-3659-4a9f-a0a4-75b844de6f65\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.407660 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgvwq\" (UniqueName: \"kubernetes.io/projected/f304cb46-3659-4a9f-a0a4-75b844de6f65-kube-api-access-vgvwq\") pod \"f304cb46-3659-4a9f-a0a4-75b844de6f65\" (UID: \"f304cb46-3659-4a9f-a0a4-75b844de6f65\") " Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.408561 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-utilities" (OuterVolumeSpecName: "utilities") pod "f304cb46-3659-4a9f-a0a4-75b844de6f65" (UID: "f304cb46-3659-4a9f-a0a4-75b844de6f65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.413786 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f304cb46-3659-4a9f-a0a4-75b844de6f65-kube-api-access-vgvwq" (OuterVolumeSpecName: "kube-api-access-vgvwq") pod "f304cb46-3659-4a9f-a0a4-75b844de6f65" (UID: "f304cb46-3659-4a9f-a0a4-75b844de6f65"). InnerVolumeSpecName "kube-api-access-vgvwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.425948 4903 generic.go:334] "Generic (PLEG): container finished" podID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerID="70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a" exitCode=0 Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.426037 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s8p7h" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.437333 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8p7h" event={"ID":"f304cb46-3659-4a9f-a0a4-75b844de6f65","Type":"ContainerDied","Data":"70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a"} Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.437383 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s8p7h" event={"ID":"f304cb46-3659-4a9f-a0a4-75b844de6f65","Type":"ContainerDied","Data":"4b48538fd858c9ac9e808e3c275bc2bf5d15c14e2f937cfa491e49d085cdca9e"} Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.437402 4903 scope.go:117] "RemoveContainer" containerID="70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.463748 4903 scope.go:117] "RemoveContainer" containerID="7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.482494 4903 scope.go:117] "RemoveContainer" containerID="7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.505494 4903 scope.go:117] "RemoveContainer" containerID="70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a" Jan 28 17:03:00 crc kubenswrapper[4903]: E0128 17:03:00.506025 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a\": container with ID starting with 70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a not found: ID does not exist" containerID="70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.506054 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a"} err="failed to get container status \"70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a\": rpc error: code = NotFound desc = could not find container \"70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a\": container with ID starting with 70f922ce652d89f7d14c62d2082ac710fc3b79d55b55334a64fa1693db8ba78a not found: ID does not exist" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.506074 4903 scope.go:117] "RemoveContainer" containerID="7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d" Jan 28 17:03:00 crc kubenswrapper[4903]: E0128 17:03:00.506373 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d\": container with ID starting with 7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d not found: ID does not exist" 
containerID="7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.506419 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d"} err="failed to get container status \"7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d\": rpc error: code = NotFound desc = could not find container \"7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d\": container with ID starting with 7ea5123eee089530ad7654f97b7c44a41ad4c562bc4e615751e30ba17e31a74d not found: ID does not exist" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.506449 4903 scope.go:117] "RemoveContainer" containerID="7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620" Jan 28 17:03:00 crc kubenswrapper[4903]: E0128 17:03:00.506724 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620\": container with ID starting with 7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620 not found: ID does not exist" containerID="7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.506761 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620"} err="failed to get container status \"7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620\": rpc error: code = NotFound desc = could not find container \"7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620\": container with ID starting with 7dba5765426e2eae4f668ab73543cec8d7b1bf250af50d4f649b1d3c7108d620 not found: ID does not exist" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.508753 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.508782 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgvwq\" (UniqueName: \"kubernetes.io/projected/f304cb46-3659-4a9f-a0a4-75b844de6f65-kube-api-access-vgvwq\") on node \"crc\" DevicePath \"\"" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.546098 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f304cb46-3659-4a9f-a0a4-75b844de6f65" (UID: "f304cb46-3659-4a9f-a0a4-75b844de6f65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.609912 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f304cb46-3659-4a9f-a0a4-75b844de6f65-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.757099 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s8p7h"] Jan 28 17:03:00 crc kubenswrapper[4903]: I0128 17:03:00.764580 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s8p7h"] Jan 28 17:03:02 crc kubenswrapper[4903]: I0128 17:03:02.422808 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" path="/var/lib/kubelet/pods/f304cb46-3659-4a9f-a0a4-75b844de6f65/volumes" Jan 28 17:04:56 crc kubenswrapper[4903]: I0128 17:04:56.613602 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:04:56 crc kubenswrapper[4903]: I0128 17:04:56.614713 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:05:26 crc kubenswrapper[4903]: I0128 17:05:26.613837 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:05:26 crc kubenswrapper[4903]: I0128 17:05:26.614551 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:05:56 crc kubenswrapper[4903]: I0128 17:05:56.613936 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:05:56 crc kubenswrapper[4903]: I0128 17:05:56.614594 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:05:56 crc kubenswrapper[4903]: I0128 17:05:56.614653 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:05:56 crc kubenswrapper[4903]: I0128 17:05:56.615390 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"87886688b115e3bd33272efeb9c1a2fd3a83c01034c6fff2375b5803ae4625f1"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:05:56 crc kubenswrapper[4903]: I0128 17:05:56.615464 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://87886688b115e3bd33272efeb9c1a2fd3a83c01034c6fff2375b5803ae4625f1" gracePeriod=600 Jan 28 17:05:57 crc kubenswrapper[4903]: I0128 17:05:57.265913 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="87886688b115e3bd33272efeb9c1a2fd3a83c01034c6fff2375b5803ae4625f1" exitCode=0 Jan 28 17:05:57 crc kubenswrapper[4903]: I0128 17:05:57.265980 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"87886688b115e3bd33272efeb9c1a2fd3a83c01034c6fff2375b5803ae4625f1"} Jan 28 17:05:57 crc kubenswrapper[4903]: I0128 17:05:57.266279 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7"} Jan 28 17:05:57 crc kubenswrapper[4903]: I0128 17:05:57.266302 4903 scope.go:117] "RemoveContainer" containerID="d2f1b3c1803ee7ce7b1136384d7b6437bb5ca465b3c65e0405637beb24d2aea7" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.040279 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-mv7nd"] Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.046087 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-mv7nd"] Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.167466 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kgkjm"] Jan 28 17:06:24 crc kubenswrapper[4903]: E0128 17:06:24.167893 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="registry-server" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.167927 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="registry-server" Jan 28 17:06:24 crc kubenswrapper[4903]: E0128 17:06:24.167949 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="extract-content" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.167960 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="extract-content" Jan 28 17:06:24 crc kubenswrapper[4903]: E0128 17:06:24.167979 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="extract-utilities" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.167990 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="extract-utilities" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.168243 4903 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f304cb46-3659-4a9f-a0a4-75b844de6f65" containerName="registry-server" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.169083 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.172953 4903 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-f5998" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.173144 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.173398 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.173510 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.176813 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kgkjm"] Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.267625 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a38d6f9-64cf-4584-8c83-3a22923ed08c-crc-storage\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.267695 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a38d6f9-64cf-4584-8c83-3a22923ed08c-node-mnt\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.267720 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ns75\" (UniqueName: \"kubernetes.io/projected/0a38d6f9-64cf-4584-8c83-3a22923ed08c-kube-api-access-5ns75\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.368859 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a38d6f9-64cf-4584-8c83-3a22923ed08c-crc-storage\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.369236 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a38d6f9-64cf-4584-8c83-3a22923ed08c-node-mnt\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.369419 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ns75\" (UniqueName: \"kubernetes.io/projected/0a38d6f9-64cf-4584-8c83-3a22923ed08c-kube-api-access-5ns75\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.369656 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a38d6f9-64cf-4584-8c83-3a22923ed08c-node-mnt\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.369982 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a38d6f9-64cf-4584-8c83-3a22923ed08c-crc-storage\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.405727 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ns75\" (UniqueName: \"kubernetes.io/projected/0a38d6f9-64cf-4584-8c83-3a22923ed08c-kube-api-access-5ns75\") pod \"crc-storage-crc-kgkjm\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.423943 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78107eb1-8fa0-4870-92ea-da8fc6a4eaa3" path="/var/lib/kubelet/pods/78107eb1-8fa0-4870-92ea-da8fc6a4eaa3/volumes" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.492445 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:24 crc kubenswrapper[4903]: I0128 17:06:24.925543 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kgkjm"] Jan 28 17:06:25 crc kubenswrapper[4903]: I0128 17:06:25.474470 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kgkjm" event={"ID":"0a38d6f9-64cf-4584-8c83-3a22923ed08c","Type":"ContainerStarted","Data":"299cd9c8cd6d8cf1fdfaa5584dc7761f12cf235ae92f842aae92203c678a21f7"} Jan 28 17:06:26 crc kubenswrapper[4903]: I0128 17:06:26.482491 4903 generic.go:334] "Generic (PLEG): container finished" podID="0a38d6f9-64cf-4584-8c83-3a22923ed08c" containerID="6552795f881fdafbf0d4f31aeaedf1f97397c010f172d6d47b50feebdc1acba4" exitCode=0 Jan 28 17:06:26 crc kubenswrapper[4903]: I0128 17:06:26.482779 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kgkjm" event={"ID":"0a38d6f9-64cf-4584-8c83-3a22923ed08c","Type":"ContainerDied","Data":"6552795f881fdafbf0d4f31aeaedf1f97397c010f172d6d47b50feebdc1acba4"} Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.764670 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.915671 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a38d6f9-64cf-4584-8c83-3a22923ed08c-node-mnt\") pod \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.915797 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a38d6f9-64cf-4584-8c83-3a22923ed08c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "0a38d6f9-64cf-4584-8c83-3a22923ed08c" (UID: "0a38d6f9-64cf-4584-8c83-3a22923ed08c"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.916344 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a38d6f9-64cf-4584-8c83-3a22923ed08c-crc-storage\") pod \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.916489 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ns75\" (UniqueName: \"kubernetes.io/projected/0a38d6f9-64cf-4584-8c83-3a22923ed08c-kube-api-access-5ns75\") pod \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\" (UID: \"0a38d6f9-64cf-4584-8c83-3a22923ed08c\") " Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.917033 4903 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/0a38d6f9-64cf-4584-8c83-3a22923ed08c-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.921336 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a38d6f9-64cf-4584-8c83-3a22923ed08c-kube-api-access-5ns75" (OuterVolumeSpecName: "kube-api-access-5ns75") pod "0a38d6f9-64cf-4584-8c83-3a22923ed08c" (UID: "0a38d6f9-64cf-4584-8c83-3a22923ed08c"). InnerVolumeSpecName "kube-api-access-5ns75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:06:27 crc kubenswrapper[4903]: I0128 17:06:27.932587 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a38d6f9-64cf-4584-8c83-3a22923ed08c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "0a38d6f9-64cf-4584-8c83-3a22923ed08c" (UID: "0a38d6f9-64cf-4584-8c83-3a22923ed08c"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:06:28 crc kubenswrapper[4903]: I0128 17:06:28.018048 4903 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/0a38d6f9-64cf-4584-8c83-3a22923ed08c-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 28 17:06:28 crc kubenswrapper[4903]: I0128 17:06:28.018099 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ns75\" (UniqueName: \"kubernetes.io/projected/0a38d6f9-64cf-4584-8c83-3a22923ed08c-kube-api-access-5ns75\") on node \"crc\" DevicePath \"\"" Jan 28 17:06:28 crc kubenswrapper[4903]: I0128 17:06:28.497000 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kgkjm" event={"ID":"0a38d6f9-64cf-4584-8c83-3a22923ed08c","Type":"ContainerDied","Data":"299cd9c8cd6d8cf1fdfaa5584dc7761f12cf235ae92f842aae92203c678a21f7"} Jan 28 17:06:28 crc kubenswrapper[4903]: I0128 17:06:28.497046 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="299cd9c8cd6d8cf1fdfaa5584dc7761f12cf235ae92f842aae92203c678a21f7" Jan 28 17:06:28 crc kubenswrapper[4903]: I0128 17:06:28.497102 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kgkjm" Jan 28 17:06:29 crc kubenswrapper[4903]: I0128 17:06:29.980657 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kgkjm"] Jan 28 17:06:29 crc kubenswrapper[4903]: I0128 17:06:29.986447 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kgkjm"] Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.122400 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-g8rpg"] Jan 28 17:06:30 crc kubenswrapper[4903]: E0128 17:06:30.122720 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a38d6f9-64cf-4584-8c83-3a22923ed08c" containerName="storage" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.122739 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a38d6f9-64cf-4584-8c83-3a22923ed08c" containerName="storage" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.122878 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a38d6f9-64cf-4584-8c83-3a22923ed08c" containerName="storage" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.123297 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.130376 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.130495 4903 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-f5998" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.130653 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.130717 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.132346 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-g8rpg"] Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.246902 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlmt\" (UniqueName: \"kubernetes.io/projected/a8c009f7-fee9-4754-8553-f9d5c7ada282-kube-api-access-vhlmt\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.247001 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a8c009f7-fee9-4754-8553-f9d5c7ada282-node-mnt\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.247032 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a8c009f7-fee9-4754-8553-f9d5c7ada282-crc-storage\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.348823 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/a8c009f7-fee9-4754-8553-f9d5c7ada282-node-mnt\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.348926 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a8c009f7-fee9-4754-8553-f9d5c7ada282-crc-storage\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.348966 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhlmt\" (UniqueName: \"kubernetes.io/projected/a8c009f7-fee9-4754-8553-f9d5c7ada282-kube-api-access-vhlmt\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.349198 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a8c009f7-fee9-4754-8553-f9d5c7ada282-node-mnt\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.349911 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a8c009f7-fee9-4754-8553-f9d5c7ada282-crc-storage\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.383351 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhlmt\" (UniqueName: \"kubernetes.io/projected/a8c009f7-fee9-4754-8553-f9d5c7ada282-kube-api-access-vhlmt\") pod \"crc-storage-crc-g8rpg\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.438215 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a38d6f9-64cf-4584-8c83-3a22923ed08c" path="/var/lib/kubelet/pods/0a38d6f9-64cf-4584-8c83-3a22923ed08c/volumes" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.444453 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:30 crc kubenswrapper[4903]: I0128 17:06:30.900635 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-g8rpg"] Jan 28 17:06:31 crc kubenswrapper[4903]: I0128 17:06:31.519028 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g8rpg" event={"ID":"a8c009f7-fee9-4754-8553-f9d5c7ada282","Type":"ContainerStarted","Data":"fcba3156c5c773399b7a444865a7c8fd5cd4dd7061572b9ae301d5de767e4156"} Jan 28 17:06:32 crc kubenswrapper[4903]: I0128 17:06:32.529918 4903 generic.go:334] "Generic (PLEG): container finished" podID="a8c009f7-fee9-4754-8553-f9d5c7ada282" containerID="428cc0def2d2ae1c544be4c3254b7f970f848b960f9316e35c3bf2a375bf38db" exitCode=0 Jan 28 17:06:32 crc kubenswrapper[4903]: I0128 17:06:32.530097 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g8rpg" event={"ID":"a8c009f7-fee9-4754-8553-f9d5c7ada282","Type":"ContainerDied","Data":"428cc0def2d2ae1c544be4c3254b7f970f848b960f9316e35c3bf2a375bf38db"} Jan 28 17:06:33 crc kubenswrapper[4903]: I0128 17:06:33.819420 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.003087 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a8c009f7-fee9-4754-8553-f9d5c7ada282-crc-storage\") pod \"a8c009f7-fee9-4754-8553-f9d5c7ada282\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.003565 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a8c009f7-fee9-4754-8553-f9d5c7ada282-node-mnt\") pod \"a8c009f7-fee9-4754-8553-f9d5c7ada282\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.003592 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhlmt\" (UniqueName: \"kubernetes.io/projected/a8c009f7-fee9-4754-8553-f9d5c7ada282-kube-api-access-vhlmt\") pod \"a8c009f7-fee9-4754-8553-f9d5c7ada282\" (UID: \"a8c009f7-fee9-4754-8553-f9d5c7ada282\") " Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.003702 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8c009f7-fee9-4754-8553-f9d5c7ada282-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a8c009f7-fee9-4754-8553-f9d5c7ada282" (UID: "a8c009f7-fee9-4754-8553-f9d5c7ada282"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.003942 4903 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a8c009f7-fee9-4754-8553-f9d5c7ada282-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.008611 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c009f7-fee9-4754-8553-f9d5c7ada282-kube-api-access-vhlmt" (OuterVolumeSpecName: "kube-api-access-vhlmt") pod "a8c009f7-fee9-4754-8553-f9d5c7ada282" (UID: "a8c009f7-fee9-4754-8553-f9d5c7ada282"). InnerVolumeSpecName "kube-api-access-vhlmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.025338 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8c009f7-fee9-4754-8553-f9d5c7ada282-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a8c009f7-fee9-4754-8553-f9d5c7ada282" (UID: "a8c009f7-fee9-4754-8553-f9d5c7ada282"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.105077 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhlmt\" (UniqueName: \"kubernetes.io/projected/a8c009f7-fee9-4754-8553-f9d5c7ada282-kube-api-access-vhlmt\") on node \"crc\" DevicePath \"\"" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.105120 4903 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a8c009f7-fee9-4754-8553-f9d5c7ada282-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.545645 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g8rpg" event={"ID":"a8c009f7-fee9-4754-8553-f9d5c7ada282","Type":"ContainerDied","Data":"fcba3156c5c773399b7a444865a7c8fd5cd4dd7061572b9ae301d5de767e4156"} Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.545762 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g8rpg" Jan 28 17:06:34 crc kubenswrapper[4903]: I0128 17:06:34.545961 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcba3156c5c773399b7a444865a7c8fd5cd4dd7061572b9ae301d5de767e4156" Jan 28 17:06:53 crc kubenswrapper[4903]: I0128 17:06:53.226551 4903 scope.go:117] "RemoveContainer" containerID="89e0e94569fdb1a2ecfdf82c46029c6cc57531549c132da9fede2bee5538c6f0" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.309924 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btxbf"] Jan 28 17:07:22 crc kubenswrapper[4903]: E0128 17:07:22.310791 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c009f7-fee9-4754-8553-f9d5c7ada282" containerName="storage" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.310804 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c009f7-fee9-4754-8553-f9d5c7ada282" containerName="storage" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.310947 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c009f7-fee9-4754-8553-f9d5c7ada282" containerName="storage" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.311964 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.316436 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btxbf"] Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.404758 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-utilities\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.405006 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-catalog-content\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.405059 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fgq\" (UniqueName: \"kubernetes.io/projected/b7dbfa8f-aaab-4724-a78b-cea4ee565972-kube-api-access-66fgq\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.507142 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-catalog-content\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.507220 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66fgq\" (UniqueName: \"kubernetes.io/projected/b7dbfa8f-aaab-4724-a78b-cea4ee565972-kube-api-access-66fgq\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.507341 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-utilities\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.507748 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-catalog-content\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.507845 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-utilities\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.527781 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-66fgq\" (UniqueName: \"kubernetes.io/projected/b7dbfa8f-aaab-4724-a78b-cea4ee565972-kube-api-access-66fgq\") pod \"redhat-marketplace-btxbf\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.637206 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:22 crc kubenswrapper[4903]: I0128 17:07:22.898341 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btxbf"] Jan 28 17:07:23 crc kubenswrapper[4903]: I0128 17:07:23.895583 4903 generic.go:334] "Generic (PLEG): container finished" podID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerID="33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b" exitCode=0 Jan 28 17:07:23 crc kubenswrapper[4903]: I0128 17:07:23.895685 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btxbf" event={"ID":"b7dbfa8f-aaab-4724-a78b-cea4ee565972","Type":"ContainerDied","Data":"33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b"} Jan 28 17:07:23 crc kubenswrapper[4903]: I0128 17:07:23.896110 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btxbf" event={"ID":"b7dbfa8f-aaab-4724-a78b-cea4ee565972","Type":"ContainerStarted","Data":"11168e5404d202a3762d26d04909ec5323bd63f1cb783ac67e6ec0e1d281b1cc"} Jan 28 17:07:25 crc kubenswrapper[4903]: I0128 17:07:25.912889 4903 generic.go:334] "Generic (PLEG): container finished" podID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerID="1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1" exitCode=0 Jan 28 17:07:25 crc kubenswrapper[4903]: I0128 17:07:25.912953 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btxbf" event={"ID":"b7dbfa8f-aaab-4724-a78b-cea4ee565972","Type":"ContainerDied","Data":"1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1"} Jan 28 17:07:26 crc kubenswrapper[4903]: I0128 17:07:26.923271 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btxbf" event={"ID":"b7dbfa8f-aaab-4724-a78b-cea4ee565972","Type":"ContainerStarted","Data":"d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136"} Jan 28 17:07:27 crc kubenswrapper[4903]: I0128 17:07:27.947352 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btxbf" podStartSLOduration=3.066944203 podStartE2EDuration="5.947334471s" podCreationTimestamp="2026-01-28 17:07:22 +0000 UTC" firstStartedPulling="2026-01-28 17:07:23.898831043 +0000 UTC m=+4916.174802554" lastFinishedPulling="2026-01-28 17:07:26.779221311 +0000 UTC m=+4919.055192822" observedRunningTime="2026-01-28 17:07:27.944303219 +0000 UTC m=+4920.220274740" watchObservedRunningTime="2026-01-28 17:07:27.947334471 +0000 UTC m=+4920.223305982" Jan 28 17:07:32 crc kubenswrapper[4903]: I0128 17:07:32.638480 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:32 crc kubenswrapper[4903]: I0128 17:07:32.639083 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:32 crc kubenswrapper[4903]: I0128 17:07:32.684839 4903 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:32 crc kubenswrapper[4903]: I0128 17:07:32.999318 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:33 crc kubenswrapper[4903]: I0128 17:07:33.047228 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btxbf"] Jan 28 17:07:34 crc kubenswrapper[4903]: I0128 17:07:34.975660 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btxbf" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="registry-server" containerID="cri-o://d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136" gracePeriod=2 Jan 28 17:07:35 crc kubenswrapper[4903]: I0128 17:07:35.924903 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:35 crc kubenswrapper[4903]: I0128 17:07:35.983869 4903 generic.go:334] "Generic (PLEG): container finished" podID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerID="d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136" exitCode=0 Jan 28 17:07:35 crc kubenswrapper[4903]: I0128 17:07:35.983915 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btxbf" event={"ID":"b7dbfa8f-aaab-4724-a78b-cea4ee565972","Type":"ContainerDied","Data":"d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136"} Jan 28 17:07:35 crc kubenswrapper[4903]: I0128 17:07:35.983921 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btxbf" Jan 28 17:07:35 crc kubenswrapper[4903]: I0128 17:07:35.983941 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btxbf" event={"ID":"b7dbfa8f-aaab-4724-a78b-cea4ee565972","Type":"ContainerDied","Data":"11168e5404d202a3762d26d04909ec5323bd63f1cb783ac67e6ec0e1d281b1cc"} Jan 28 17:07:35 crc kubenswrapper[4903]: I0128 17:07:35.983957 4903 scope.go:117] "RemoveContainer" containerID="d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.002821 4903 scope.go:117] "RemoveContainer" containerID="1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.022129 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-catalog-content\") pod \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.022316 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-utilities\") pod \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.022414 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66fgq\" (UniqueName: \"kubernetes.io/projected/b7dbfa8f-aaab-4724-a78b-cea4ee565972-kube-api-access-66fgq\") pod \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\" (UID: \"b7dbfa8f-aaab-4724-a78b-cea4ee565972\") " Jan 28 
17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.023386 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-utilities" (OuterVolumeSpecName: "utilities") pod "b7dbfa8f-aaab-4724-a78b-cea4ee565972" (UID: "b7dbfa8f-aaab-4724-a78b-cea4ee565972"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.024576 4903 scope.go:117] "RemoveContainer" containerID="33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.028881 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7dbfa8f-aaab-4724-a78b-cea4ee565972-kube-api-access-66fgq" (OuterVolumeSpecName: "kube-api-access-66fgq") pod "b7dbfa8f-aaab-4724-a78b-cea4ee565972" (UID: "b7dbfa8f-aaab-4724-a78b-cea4ee565972"). InnerVolumeSpecName "kube-api-access-66fgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.054200 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7dbfa8f-aaab-4724-a78b-cea4ee565972" (UID: "b7dbfa8f-aaab-4724-a78b-cea4ee565972"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.070762 4903 scope.go:117] "RemoveContainer" containerID="d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136" Jan 28 17:07:36 crc kubenswrapper[4903]: E0128 17:07:36.071251 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136\": container with ID starting with d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136 not found: ID does not exist" containerID="d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.071297 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136"} err="failed to get container status \"d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136\": rpc error: code = NotFound desc = could not find container \"d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136\": container with ID starting with d4911ef609b8e6e876a5dc61860ad5cb4867aa2db820e460020cde19b5d91136 not found: ID does not exist" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.071323 4903 scope.go:117] "RemoveContainer" containerID="1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1" Jan 28 17:07:36 crc kubenswrapper[4903]: E0128 17:07:36.071642 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1\": container with ID starting with 1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1 not found: ID does not exist" containerID="1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.071674 4903 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1"} err="failed to get container status \"1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1\": rpc error: code = NotFound desc = could not find container \"1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1\": container with ID starting with 1fe944d491c8384d496b719114b3b31a0eb5da5bace9bd158d290d6f79f91ad1 not found: ID does not exist" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.071698 4903 scope.go:117] "RemoveContainer" containerID="33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b" Jan 28 17:07:36 crc kubenswrapper[4903]: E0128 17:07:36.072020 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b\": container with ID starting with 33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b not found: ID does not exist" containerID="33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.072049 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b"} err="failed to get container status \"33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b\": rpc error: code = NotFound desc = could not find container \"33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b\": container with ID starting with 33ec05cc823b7088a84c71579884d5d93ad7f86bb395830d760e2279688dfe7b not found: ID does not exist" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.123990 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66fgq\" (UniqueName: \"kubernetes.io/projected/b7dbfa8f-aaab-4724-a78b-cea4ee565972-kube-api-access-66fgq\") on node \"crc\" DevicePath \"\"" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.124037 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.124047 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7dbfa8f-aaab-4724-a78b-cea4ee565972-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.320424 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btxbf"] Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.326285 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btxbf"] Jan 28 17:07:36 crc kubenswrapper[4903]: I0128 17:07:36.422994 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" path="/var/lib/kubelet/pods/b7dbfa8f-aaab-4724-a78b-cea4ee565972/volumes" Jan 28 17:07:56 crc kubenswrapper[4903]: I0128 17:07:56.613967 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:07:56 crc kubenswrapper[4903]: I0128 17:07:56.614609 4903 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:08:26 crc kubenswrapper[4903]: I0128 17:08:26.613596 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:08:26 crc kubenswrapper[4903]: I0128 17:08:26.614158 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.518577 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-xc6km"] Jan 28 17:08:36 crc kubenswrapper[4903]: E0128 17:08:36.519520 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="registry-server" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.519558 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="registry-server" Jan 28 17:08:36 crc kubenswrapper[4903]: E0128 17:08:36.519580 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="extract-content" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.519587 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="extract-content" Jan 28 17:08:36 crc kubenswrapper[4903]: E0128 17:08:36.519614 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="extract-utilities" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.519623 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="extract-utilities" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.519793 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7dbfa8f-aaab-4724-a78b-cea4ee565972" containerName="registry-server" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.520582 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.523403 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r27gr" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.523475 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.523734 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.523908 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.524754 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.525380 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-xffxt"] Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.526709 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.532508 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-xc6km"] Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.559660 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-xffxt"] Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.694959 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-config\") pod \"dnsmasq-dns-5986db9b4f-xffxt\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.695034 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57pk5\" (UniqueName: \"kubernetes.io/projected/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-kube-api-access-57pk5\") pod \"dnsmasq-dns-5986db9b4f-xffxt\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.695071 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.695137 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77rz\" (UniqueName: \"kubernetes.io/projected/2f511e91-7e5d-4a21-b39f-e56cc537612e-kube-api-access-j77rz\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.695168 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-config\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " 
pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.718382 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-xc6km"] Jan 28 17:08:36 crc kubenswrapper[4903]: E0128 17:08:36.718964 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-j77rz], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" podUID="2f511e91-7e5d-4a21-b39f-e56cc537612e" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.751862 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-ksccs"] Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.753255 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.767415 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-ksccs"] Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.796490 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-config\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.797282 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-dns-svc\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.797342 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb84m\" (UniqueName: \"kubernetes.io/projected/5fe9f5bf-29a4-4045-be1e-82dab38b2560-kube-api-access-wb84m\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.797371 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-config\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.797418 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-config\") pod \"dnsmasq-dns-5986db9b4f-xffxt\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.797500 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57pk5\" (UniqueName: \"kubernetes.io/projected/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-kube-api-access-57pk5\") pod \"dnsmasq-dns-5986db9b4f-xffxt\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.797557 4903 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.797688 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j77rz\" (UniqueName: \"kubernetes.io/projected/2f511e91-7e5d-4a21-b39f-e56cc537612e-kube-api-access-j77rz\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.798406 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-config\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.799343 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.799579 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-config\") pod \"dnsmasq-dns-5986db9b4f-xffxt\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.822608 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57pk5\" (UniqueName: \"kubernetes.io/projected/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-kube-api-access-57pk5\") pod \"dnsmasq-dns-5986db9b4f-xffxt\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.852377 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77rz\" (UniqueName: \"kubernetes.io/projected/2f511e91-7e5d-4a21-b39f-e56cc537612e-kube-api-access-j77rz\") pod \"dnsmasq-dns-56bbd59dc5-xc6km\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.856202 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.898335 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-dns-svc\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.898406 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb84m\" (UniqueName: \"kubernetes.io/projected/5fe9f5bf-29a4-4045-be1e-82dab38b2560-kube-api-access-wb84m\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.898432 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-config\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.899498 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-config\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.899655 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-dns-svc\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.922944 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.926171 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb84m\" (UniqueName: \"kubernetes.io/projected/5fe9f5bf-29a4-4045-be1e-82dab38b2560-kube-api-access-wb84m\") pod \"dnsmasq-dns-865d9b578f-ksccs\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:36 crc kubenswrapper[4903]: I0128 17:08:36.982113 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.000658 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-config\") pod \"2f511e91-7e5d-4a21-b39f-e56cc537612e\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.001268 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j77rz\" (UniqueName: \"kubernetes.io/projected/2f511e91-7e5d-4a21-b39f-e56cc537612e-kube-api-access-j77rz\") pod \"2f511e91-7e5d-4a21-b39f-e56cc537612e\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.001349 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-dns-svc\") pod \"2f511e91-7e5d-4a21-b39f-e56cc537612e\" (UID: \"2f511e91-7e5d-4a21-b39f-e56cc537612e\") " Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.001401 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-config" (OuterVolumeSpecName: "config") pod "2f511e91-7e5d-4a21-b39f-e56cc537612e" (UID: "2f511e91-7e5d-4a21-b39f-e56cc537612e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.001679 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.002346 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f511e91-7e5d-4a21-b39f-e56cc537612e" (UID: "2f511e91-7e5d-4a21-b39f-e56cc537612e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.030773 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f511e91-7e5d-4a21-b39f-e56cc537612e-kube-api-access-j77rz" (OuterVolumeSpecName: "kube-api-access-j77rz") pod "2f511e91-7e5d-4a21-b39f-e56cc537612e" (UID: "2f511e91-7e5d-4a21-b39f-e56cc537612e"). InnerVolumeSpecName "kube-api-access-j77rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.076263 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.103114 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j77rz\" (UniqueName: \"kubernetes.io/projected/2f511e91-7e5d-4a21-b39f-e56cc537612e-kube-api-access-j77rz\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.103150 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f511e91-7e5d-4a21-b39f-e56cc537612e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.298723 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-xffxt"] Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.308384 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-w8sx7"] Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.309670 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.317011 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-w8sx7"] Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.410345 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-config\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.410412 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.410436 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dch7q\" (UniqueName: \"kubernetes.io/projected/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-kube-api-access-dch7q\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.512333 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-config\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.512400 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.512428 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dch7q\" (UniqueName: \"kubernetes.io/projected/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-kube-api-access-dch7q\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: 
\"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.513778 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-config\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.513960 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.555264 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dch7q\" (UniqueName: \"kubernetes.io/projected/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-kube-api-access-dch7q\") pod \"dnsmasq-dns-5d79f765b5-w8sx7\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.558672 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-xffxt"] Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.633298 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.855362 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-ksccs"] Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.931540 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.932632 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" event={"ID":"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c","Type":"ContainerStarted","Data":"a78966e7754a28312d68997440b19f475d53d18c720ade30bdd93c0710ae59ba"} Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.932657 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" event={"ID":"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c","Type":"ContainerStarted","Data":"079907a15d7432a767701fbe8bd5223f212c1e0e058c4554a88c262b5b476db5"} Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.932745 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.935166 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-xc6km" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.936252 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.936401 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" event={"ID":"5fe9f5bf-29a4-4045-be1e-82dab38b2560","Type":"ContainerStarted","Data":"1929dc30301f1fad1ae284b01efdfa1a357f9a209431678a61a59ee47144680b"} Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.940339 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.940753 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.940926 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.941432 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.941572 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.942150 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-f7gmf" Jan 28 17:08:37 crc kubenswrapper[4903]: I0128 17:08:37.960988 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.039672 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-xc6km"] Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.048893 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-xc6km"] Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.091214 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-w8sx7"] Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.122714 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.122794 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.122857 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.122891 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.122936 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptc64\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-kube-api-access-ptc64\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.122970 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.123027 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.123089 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.123121 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.123265 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.123319 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.225398 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.225819 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.225878 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.225903 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.225937 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.225965 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.225990 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptc64\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-kube-api-access-ptc64\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.226021 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.226060 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.226098 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.226122 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.226647 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.226683 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.227145 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.227210 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.228105 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.228881 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.228916 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1401f4aa03b3a6aa45d828a1f682f335d8793ad57f5468fd46ec0a0c7cab6871/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.231921 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.237892 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.238325 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.238724 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.248980 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptc64\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-kube-api-access-ptc64\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.265626 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.288147 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.425966 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f511e91-7e5d-4a21-b39f-e56cc537612e" path="/var/lib/kubelet/pods/2f511e91-7e5d-4a21-b39f-e56cc537612e/volumes" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.494986 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.496150 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.498198 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.498428 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.498543 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.498433 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.498632 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.498697 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tshzg" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.498739 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.512060 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.631804 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.631863 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.631890 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.631916 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: 
I0128 17:08:38.631935 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-config-data\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.631967 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxpg\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-kube-api-access-4zxpg\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.632154 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.632245 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.632285 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.632425 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.632561 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733398 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733750 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733773 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733809 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733843 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733863 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733880 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733895 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733912 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733930 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-config-data\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.733946 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxpg\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-kube-api-access-4zxpg\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.735397 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.735502 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.735819 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.735838 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.736542 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-config-data\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.738796 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.738983 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.739123 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.740068 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.751319 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.752024 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxpg\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-kube-api-access-4zxpg\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.753701 4903 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.753744 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fc2803564cd4572c17781a518f92c5cad76f1e3586297d676207076497b1b22b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: W0128 17:08:38.755344 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a04f428_2a31_4bc7_a1bc_a0830d6a3e8c.slice/crio-f03e6d9105b2850cfad30ecceb60385936527e89aa6e4cc9f24bf9f0058d84a2 WatchSource:0}: Error finding container f03e6d9105b2850cfad30ecceb60385936527e89aa6e4cc9f24bf9f0058d84a2: Status 404 returned error can't find the container with id f03e6d9105b2850cfad30ecceb60385936527e89aa6e4cc9f24bf9f0058d84a2 Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.783518 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.814787 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.958676 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c","Type":"ContainerStarted","Data":"f03e6d9105b2850cfad30ecceb60385936527e89aa6e4cc9f24bf9f0058d84a2"} Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.962365 4903 generic.go:334] "Generic (PLEG): container finished" podID="aecc7146-c7c1-4477-ba95-ffb5f9d38b0c" containerID="a78966e7754a28312d68997440b19f475d53d18c720ade30bdd93c0710ae59ba" exitCode=0 Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.962455 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" event={"ID":"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c","Type":"ContainerDied","Data":"a78966e7754a28312d68997440b19f475d53d18c720ade30bdd93c0710ae59ba"} Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.972421 4903 generic.go:334] "Generic (PLEG): container finished" podID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerID="7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465" exitCode=0 Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.972511 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" event={"ID":"5fe9f5bf-29a4-4045-be1e-82dab38b2560","Type":"ContainerDied","Data":"7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465"} Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.974288 4903 generic.go:334] "Generic (PLEG): container finished" podID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerID="0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538" exitCode=0 Jan 28 17:08:38 crc 
kubenswrapper[4903]: I0128 17:08:38.974334 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" event={"ID":"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0","Type":"ContainerDied","Data":"0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538"} Jan 28 17:08:38 crc kubenswrapper[4903]: I0128 17:08:38.974362 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" event={"ID":"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0","Type":"ContainerStarted","Data":"4e9646a7d9995e558ab0b3f6835e5295b20b777d4b939caf6e0b0c5114aa80b4"} Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.143042 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.144815 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.148580 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.148934 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.149766 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.151071 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vxcpb" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.153874 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.167062 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.246565 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8jp\" (UniqueName: \"kubernetes.io/projected/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-kube-api-access-rw8jp\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.246671 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.246789 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-config-data-generated\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.246821 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " 
pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.246998 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.247172 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-operator-scripts\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.247275 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-config-data-default\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.247341 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-kolla-config\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: E0128 17:08:39.251440 4903 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 28 17:08:39 crc kubenswrapper[4903]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5fe9f5bf-29a4-4045-be1e-82dab38b2560/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 28 17:08:39 crc kubenswrapper[4903]: > podSandboxID="1929dc30301f1fad1ae284b01efdfa1a357f9a209431678a61a59ee47144680b" Jan 28 17:08:39 crc kubenswrapper[4903]: E0128 17:08:39.251655 4903 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 17:08:39 crc kubenswrapper[4903]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb6hc5h68h68h594h659hdbh679h65ch5f6hdch6h5b9h8fh55hfhf8h57fhc7h56ch687h669h559h678h5dhc7hf7h697h5d6h9ch669h54fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wb84m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-865d9b578f-ksccs_openstack(5fe9f5bf-29a4-4045-be1e-82dab38b2560): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5fe9f5bf-29a4-4045-be1e-82dab38b2560/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 28 17:08:39 crc kubenswrapper[4903]: > logger="UnhandledError" Jan 28 17:08:39 crc kubenswrapper[4903]: E0128 17:08:39.253804 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5fe9f5bf-29a4-4045-be1e-82dab38b2560/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.256625 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:08:39 crc kubenswrapper[4903]: W0128 17:08:39.311729 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9b69ca6_bdea_4c56_8c4a_66d030cf7917.slice/crio-884ede21c13d281223ed5e88a7cefb5208733775117d755e9cb2f3d52e56df16 WatchSource:0}: Error finding container 
884ede21c13d281223ed5e88a7cefb5208733775117d755e9cb2f3d52e56df16: Status 404 returned error can't find the container with id 884ede21c13d281223ed5e88a7cefb5208733775117d755e9cb2f3d52e56df16 Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.315653 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.348890 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.348990 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-operator-scripts\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.349037 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-config-data-default\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.349061 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-kolla-config\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.349098 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw8jp\" (UniqueName: \"kubernetes.io/projected/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-kube-api-access-rw8jp\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.349127 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.349174 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-config-data-generated\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.349223 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.350077 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-config-data-generated\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.351774 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-config-data-default\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.351975 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-kolla-config\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.352048 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-operator-scripts\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.353785 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.354389 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.356872 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.356910 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bd467384120f2b6a3c53a17d38f3b88b838bd9ad7185e49ca6cae9da2787b2c/globalmount\"" pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.368346 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw8jp\" (UniqueName: \"kubernetes.io/projected/85d3f93c-ec10-4406-9c34-3d5a97ec1c78-kube-api-access-rw8jp\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.422992 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05ca660c-1173-4fb0-ac90-7cb21c785d9f\") pod \"openstack-galera-0\" (UID: \"85d3f93c-ec10-4406-9c34-3d5a97ec1c78\") " pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.451243 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-config\") pod \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.451389 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57pk5\" (UniqueName: \"kubernetes.io/projected/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-kube-api-access-57pk5\") pod \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\" (UID: \"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c\") " Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.455558 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-kube-api-access-57pk5" (OuterVolumeSpecName: "kube-api-access-57pk5") pod "aecc7146-c7c1-4477-ba95-ffb5f9d38b0c" (UID: "aecc7146-c7c1-4477-ba95-ffb5f9d38b0c"). InnerVolumeSpecName "kube-api-access-57pk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.483169 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-config" (OuterVolumeSpecName: "config") pod "aecc7146-c7c1-4477-ba95-ffb5f9d38b0c" (UID: "aecc7146-c7c1-4477-ba95-ffb5f9d38b0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.498328 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.555590 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.555893 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57pk5\" (UniqueName: \"kubernetes.io/projected/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c-kube-api-access-57pk5\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.985138 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" event={"ID":"aecc7146-c7c1-4477-ba95-ffb5f9d38b0c","Type":"ContainerDied","Data":"079907a15d7432a767701fbe8bd5223f212c1e0e058c4554a88c262b5b476db5"} Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.985160 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-xffxt" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.985189 4903 scope.go:117] "RemoveContainer" containerID="a78966e7754a28312d68997440b19f475d53d18c720ade30bdd93c0710ae59ba" Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.992070 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9b69ca6-bdea-4c56-8c4a-66d030cf7917","Type":"ContainerStarted","Data":"884ede21c13d281223ed5e88a7cefb5208733775117d755e9cb2f3d52e56df16"} Jan 28 17:08:39 crc kubenswrapper[4903]: I0128 17:08:39.998857 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" event={"ID":"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0","Type":"ContainerStarted","Data":"8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda"} Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.020885 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.038754 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" podStartSLOduration=3.038732332 podStartE2EDuration="3.038732332s" podCreationTimestamp="2026-01-28 17:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:08:40.037849878 +0000 UTC m=+4992.313821389" watchObservedRunningTime="2026-01-28 17:08:40.038732332 +0000 UTC m=+4992.314703843" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.103889 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-xffxt"] Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.112985 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-xffxt"] Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.439743 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecc7146-c7c1-4477-ba95-ffb5f9d38b0c" path="/var/lib/kubelet/pods/aecc7146-c7c1-4477-ba95-ffb5f9d38b0c/volumes" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.756841 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 17:08:40 crc kubenswrapper[4903]: E0128 17:08:40.757491 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecc7146-c7c1-4477-ba95-ffb5f9d38b0c" containerName="init" Jan 28 
17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.757514 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecc7146-c7c1-4477-ba95-ffb5f9d38b0c" containerName="init" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.757705 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecc7146-c7c1-4477-ba95-ffb5f9d38b0c" containerName="init" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.758610 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.762040 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.762040 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.762140 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.762468 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-28lpb" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.773744 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.874590 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.874646 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31642982-51f2-4e0f-b54f-3b0cf5b508a5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.874683 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.874753 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.874840 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31642982-51f2-4e0f-b54f-3b0cf5b508a5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.874930 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmphg\" (UniqueName: \"kubernetes.io/projected/31642982-51f2-4e0f-b54f-3b0cf5b508a5-kube-api-access-dmphg\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.874968 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.875002 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31642982-51f2-4e0f-b54f-3b0cf5b508a5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.977128 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.977242 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31642982-51f2-4e0f-b54f-3b0cf5b508a5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.977304 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.977340 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31642982-51f2-4e0f-b54f-3b0cf5b508a5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.977375 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.977446 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: 
I0128 17:08:40.977494 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31642982-51f2-4e0f-b54f-3b0cf5b508a5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.977577 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmphg\" (UniqueName: \"kubernetes.io/projected/31642982-51f2-4e0f-b54f-3b0cf5b508a5-kube-api-access-dmphg\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.979776 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/31642982-51f2-4e0f-b54f-3b0cf5b508a5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.980463 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.981592 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.982364 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/31642982-51f2-4e0f-b54f-3b0cf5b508a5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.984273 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31642982-51f2-4e0f-b54f-3b0cf5b508a5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.984603 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.984650 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1afeba2c3203fadd8f82f28c31ed96d31655c990bfe6fbab6351593f3d016b0c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.988101 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/31642982-51f2-4e0f-b54f-3b0cf5b508a5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:40 crc kubenswrapper[4903]: I0128 17:08:40.997223 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmphg\" (UniqueName: \"kubernetes.io/projected/31642982-51f2-4e0f-b54f-3b0cf5b508a5-kube-api-access-dmphg\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.010202 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85d3f93c-ec10-4406-9c34-3d5a97ec1c78","Type":"ContainerStarted","Data":"61f471cbf7b677c181480907c91d9111886bcaeb234195748c2f9b7f40e07f5f"} Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.010248 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85d3f93c-ec10-4406-9c34-3d5a97ec1c78","Type":"ContainerStarted","Data":"09f96757442903d1aa131d2b3c99ebc36edeac9a35794a5c49c4a2625e79aa8b"} Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.013204 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" event={"ID":"5fe9f5bf-29a4-4045-be1e-82dab38b2560","Type":"ContainerStarted","Data":"0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32"} Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.013622 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.016839 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9b69ca6-bdea-4c56-8c4a-66d030cf7917","Type":"ContainerStarted","Data":"243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce"} Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.018275 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9d90225f-6dac-464a-ab83-e25d263f34c6\") pod \"openstack-cell1-galera-0\" (UID: \"31642982-51f2-4e0f-b54f-3b0cf5b508a5\") " pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.019423 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c","Type":"ContainerStarted","Data":"79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56"} Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.020173 4903 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.079856 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.104474 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" podStartSLOduration=5.104444991 podStartE2EDuration="5.104444991s" podCreationTimestamp="2026-01-28 17:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:08:41.096075514 +0000 UTC m=+4993.372047025" watchObservedRunningTime="2026-01-28 17:08:41.104444991 +0000 UTC m=+4993.380416502" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.140411 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.141773 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.145575 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.145812 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.146131 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6ssj7" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.182645 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e07b6ec6-43d8-4509-8d64-3b07663d45df-config-data\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.182700 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b6ec6-43d8-4509-8d64-3b07663d45df-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.182754 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e07b6ec6-43d8-4509-8d64-3b07663d45df-kolla-config\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.182771 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vvf\" (UniqueName: \"kubernetes.io/projected/e07b6ec6-43d8-4509-8d64-3b07663d45df-kube-api-access-h2vvf\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.182795 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b6ec6-43d8-4509-8d64-3b07663d45df-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " 
pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.197081 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.284239 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b6ec6-43d8-4509-8d64-3b07663d45df-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.284672 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e07b6ec6-43d8-4509-8d64-3b07663d45df-kolla-config\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.284715 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vvf\" (UniqueName: \"kubernetes.io/projected/e07b6ec6-43d8-4509-8d64-3b07663d45df-kube-api-access-h2vvf\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.284754 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b6ec6-43d8-4509-8d64-3b07663d45df-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.285041 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e07b6ec6-43d8-4509-8d64-3b07663d45df-config-data\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.285899 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e07b6ec6-43d8-4509-8d64-3b07663d45df-config-data\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.286141 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e07b6ec6-43d8-4509-8d64-3b07663d45df-kolla-config\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.290632 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b6ec6-43d8-4509-8d64-3b07663d45df-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.305371 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vvf\" (UniqueName: \"kubernetes.io/projected/e07b6ec6-43d8-4509-8d64-3b07663d45df-kube-api-access-h2vvf\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.306133 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e07b6ec6-43d8-4509-8d64-3b07663d45df-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e07b6ec6-43d8-4509-8d64-3b07663d45df\") " pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.488480 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.563944 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 17:08:41 crc kubenswrapper[4903]: W0128 17:08:41.568918 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31642982_51f2_4e0f_b54f_3b0cf5b508a5.slice/crio-ed41739d42f1527eccf3b8a8529db89fce649730a72da8948cc105c3cdcc9680 WatchSource:0}: Error finding container ed41739d42f1527eccf3b8a8529db89fce649730a72da8948cc105c3cdcc9680: Status 404 returned error can't find the container with id ed41739d42f1527eccf3b8a8529db89fce649730a72da8948cc105c3cdcc9680 Jan 28 17:08:41 crc kubenswrapper[4903]: I0128 17:08:41.726336 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 17:08:42 crc kubenswrapper[4903]: I0128 17:08:42.030746 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31642982-51f2-4e0f-b54f-3b0cf5b508a5","Type":"ContainerStarted","Data":"9e6134f8b0701d500edc23c62e44dee65d8f1cd23f7c6ece339c2707bb736830"} Jan 28 17:08:42 crc kubenswrapper[4903]: I0128 17:08:42.030792 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31642982-51f2-4e0f-b54f-3b0cf5b508a5","Type":"ContainerStarted","Data":"ed41739d42f1527eccf3b8a8529db89fce649730a72da8948cc105c3cdcc9680"} Jan 28 17:08:42 crc kubenswrapper[4903]: I0128 17:08:42.033142 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e07b6ec6-43d8-4509-8d64-3b07663d45df","Type":"ContainerStarted","Data":"d1b8c008bd59f06192e0716cefbb34f06c0061bee2947e572c8ca4cf49b0fd4e"} Jan 28 17:08:43 crc kubenswrapper[4903]: I0128 17:08:43.042311 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e07b6ec6-43d8-4509-8d64-3b07663d45df","Type":"ContainerStarted","Data":"35d99a4c27eb7081ee0219d8eeae9cc84bbca978be215017370fa8e3fd235693"} Jan 28 17:08:43 crc kubenswrapper[4903]: I0128 17:08:43.060055 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.06003111 podStartE2EDuration="2.06003111s" podCreationTimestamp="2026-01-28 17:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:08:43.05599892 +0000 UTC m=+4995.331970441" watchObservedRunningTime="2026-01-28 17:08:43.06003111 +0000 UTC m=+4995.336002631" Jan 28 17:08:44 crc kubenswrapper[4903]: I0128 17:08:44.052308 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 17:08:47 crc kubenswrapper[4903]: I0128 17:08:47.078465 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:47 crc kubenswrapper[4903]: I0128 17:08:47.636911 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:08:47 crc kubenswrapper[4903]: I0128 17:08:47.698091 4903 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-ksccs"] Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.082308 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerName="dnsmasq-dns" containerID="cri-o://0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32" gracePeriod=10 Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.751256 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.817652 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-dns-svc\") pod \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.817733 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-config\") pod \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.817870 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb84m\" (UniqueName: \"kubernetes.io/projected/5fe9f5bf-29a4-4045-be1e-82dab38b2560-kube-api-access-wb84m\") pod \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\" (UID: \"5fe9f5bf-29a4-4045-be1e-82dab38b2560\") " Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.823653 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe9f5bf-29a4-4045-be1e-82dab38b2560-kube-api-access-wb84m" (OuterVolumeSpecName: "kube-api-access-wb84m") pod "5fe9f5bf-29a4-4045-be1e-82dab38b2560" (UID: "5fe9f5bf-29a4-4045-be1e-82dab38b2560"). InnerVolumeSpecName "kube-api-access-wb84m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.860841 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-config" (OuterVolumeSpecName: "config") pod "5fe9f5bf-29a4-4045-be1e-82dab38b2560" (UID: "5fe9f5bf-29a4-4045-be1e-82dab38b2560"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.862264 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fe9f5bf-29a4-4045-be1e-82dab38b2560" (UID: "5fe9f5bf-29a4-4045-be1e-82dab38b2560"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.919281 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.919327 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe9f5bf-29a4-4045-be1e-82dab38b2560-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:48 crc kubenswrapper[4903]: I0128 17:08:48.919343 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb84m\" (UniqueName: \"kubernetes.io/projected/5fe9f5bf-29a4-4045-be1e-82dab38b2560-kube-api-access-wb84m\") on node \"crc\" DevicePath \"\"" Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.091852 4903 generic.go:334] "Generic (PLEG): container finished" podID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerID="0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32" exitCode=0 Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.091912 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" event={"ID":"5fe9f5bf-29a4-4045-be1e-82dab38b2560","Type":"ContainerDied","Data":"0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32"} Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.091984 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" event={"ID":"5fe9f5bf-29a4-4045-be1e-82dab38b2560","Type":"ContainerDied","Data":"1929dc30301f1fad1ae284b01efdfa1a357f9a209431678a61a59ee47144680b"} Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.092009 4903 scope.go:117] "RemoveContainer" containerID="0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32" Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.092014 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865d9b578f-ksccs" Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.115290 4903 scope.go:117] "RemoveContainer" containerID="7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465" Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.134625 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-ksccs"] Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.144903 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865d9b578f-ksccs"] Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.151859 4903 scope.go:117] "RemoveContainer" containerID="0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32" Jan 28 17:08:49 crc kubenswrapper[4903]: E0128 17:08:49.152663 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32\": container with ID starting with 0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32 not found: ID does not exist" containerID="0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32" Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.152718 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32"} err="failed to get container status \"0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32\": rpc error: code = NotFound desc = could not find container \"0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32\": container with ID starting with 0caa13c8d59c9fa52e2bb584d04f5240c5e114bcd591de4e704a1fafece5ea32 not found: ID does not exist" Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.152743 4903 scope.go:117] "RemoveContainer" containerID="7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465" Jan 28 17:08:49 crc kubenswrapper[4903]: E0128 17:08:49.153458 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465\": container with ID starting with 7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465 not found: ID does not exist" containerID="7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465" Jan 28 17:08:49 crc kubenswrapper[4903]: I0128 17:08:49.153507 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465"} err="failed to get container status \"7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465\": rpc error: code = NotFound desc = could not find container \"7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465\": container with ID starting with 7249f91dfa5262c6cae71f1de8cf36250eded5cabcef6f2229868afb4abd7465 not found: ID does not exist" Jan 28 17:08:50 crc kubenswrapper[4903]: I0128 17:08:50.103734 4903 generic.go:334] "Generic (PLEG): container finished" podID="31642982-51f2-4e0f-b54f-3b0cf5b508a5" containerID="9e6134f8b0701d500edc23c62e44dee65d8f1cd23f7c6ece339c2707bb736830" exitCode=0 Jan 28 17:08:50 crc kubenswrapper[4903]: I0128 17:08:50.103829 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"31642982-51f2-4e0f-b54f-3b0cf5b508a5","Type":"ContainerDied","Data":"9e6134f8b0701d500edc23c62e44dee65d8f1cd23f7c6ece339c2707bb736830"} Jan 28 17:08:50 crc kubenswrapper[4903]: I0128 17:08:50.106135 4903 generic.go:334] "Generic (PLEG): container finished" podID="85d3f93c-ec10-4406-9c34-3d5a97ec1c78" containerID="61f471cbf7b677c181480907c91d9111886bcaeb234195748c2f9b7f40e07f5f" exitCode=0 Jan 28 17:08:50 crc kubenswrapper[4903]: I0128 17:08:50.106225 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85d3f93c-ec10-4406-9c34-3d5a97ec1c78","Type":"ContainerDied","Data":"61f471cbf7b677c181480907c91d9111886bcaeb234195748c2f9b7f40e07f5f"} Jan 28 17:08:50 crc kubenswrapper[4903]: I0128 17:08:50.426510 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" path="/var/lib/kubelet/pods/5fe9f5bf-29a4-4045-be1e-82dab38b2560/volumes" Jan 28 17:08:51 crc kubenswrapper[4903]: I0128 17:08:51.116693 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"85d3f93c-ec10-4406-9c34-3d5a97ec1c78","Type":"ContainerStarted","Data":"e4b4fb8b0f176893ade64ae0668ef85845c45ff754366bf49c1615a6bf49c099"} Jan 28 17:08:51 crc kubenswrapper[4903]: I0128 17:08:51.120053 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"31642982-51f2-4e0f-b54f-3b0cf5b508a5","Type":"ContainerStarted","Data":"b18e65d35bd7737d22217f0a4f07ca437ba7aadef201a49f295eb1d72e9df6b4"} Jan 28 17:08:51 crc kubenswrapper[4903]: I0128 17:08:51.151236 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.151216917 podStartE2EDuration="13.151216917s" podCreationTimestamp="2026-01-28 17:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:08:51.145334427 +0000 UTC m=+5003.421305958" watchObservedRunningTime="2026-01-28 17:08:51.151216917 +0000 UTC m=+5003.427188428" Jan 28 17:08:51 crc kubenswrapper[4903]: I0128 17:08:51.187319 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.18729974 podStartE2EDuration="12.18729974s" podCreationTimestamp="2026-01-28 17:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:08:51.183706102 +0000 UTC m=+5003.459677613" watchObservedRunningTime="2026-01-28 17:08:51.18729974 +0000 UTC m=+5003.463271251" Jan 28 17:08:51 crc kubenswrapper[4903]: I0128 17:08:51.490467 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 17:08:56 crc kubenswrapper[4903]: I0128 17:08:56.613481 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:08:56 crc kubenswrapper[4903]: I0128 17:08:56.614081 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:08:56 crc kubenswrapper[4903]: I0128 17:08:56.614132 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:08:56 crc kubenswrapper[4903]: I0128 17:08:56.614799 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:08:56 crc kubenswrapper[4903]: I0128 17:08:56.614855 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" gracePeriod=600 Jan 28 17:08:56 crc kubenswrapper[4903]: E0128 17:08:56.739704 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:08:57 crc kubenswrapper[4903]: I0128 17:08:57.173257 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" exitCode=0 Jan 28 17:08:57 crc kubenswrapper[4903]: I0128 17:08:57.173349 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7"} Jan 28 17:08:57 crc kubenswrapper[4903]: I0128 17:08:57.173997 4903 scope.go:117] "RemoveContainer" containerID="87886688b115e3bd33272efeb9c1a2fd3a83c01034c6fff2375b5803ae4625f1" Jan 28 17:08:57 crc kubenswrapper[4903]: I0128 17:08:57.174583 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:08:57 crc kubenswrapper[4903]: E0128 17:08:57.174861 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:08:59 crc kubenswrapper[4903]: I0128 17:08:59.498593 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 17:08:59 crc kubenswrapper[4903]: I0128 17:08:59.498676 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 17:08:59 crc kubenswrapper[4903]: I0128 17:08:59.666942 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/openstack-galera-0" Jan 28 17:09:00 crc kubenswrapper[4903]: I0128 17:09:00.269460 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 17:09:01 crc kubenswrapper[4903]: I0128 17:09:01.080144 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 17:09:01 crc kubenswrapper[4903]: I0128 17:09:01.080212 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 17:09:01 crc kubenswrapper[4903]: I0128 17:09:01.150855 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 17:09:01 crc kubenswrapper[4903]: I0128 17:09:01.266777 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.072273 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hkddh"] Jan 28 17:09:08 crc kubenswrapper[4903]: E0128 17:09:08.073133 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerName="dnsmasq-dns" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.073147 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerName="dnsmasq-dns" Jan 28 17:09:08 crc kubenswrapper[4903]: E0128 17:09:08.073156 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerName="init" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.073162 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerName="init" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.073291 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe9f5bf-29a4-4045-be1e-82dab38b2560" containerName="dnsmasq-dns" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.074121 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.078488 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.084087 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hkddh"] Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.151062 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-operator-scripts\") pod \"root-account-create-update-hkddh\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.151128 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px8lr\" (UniqueName: \"kubernetes.io/projected/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-kube-api-access-px8lr\") pod \"root-account-create-update-hkddh\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.252362 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-operator-scripts\") pod \"root-account-create-update-hkddh\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.252422 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px8lr\" (UniqueName: \"kubernetes.io/projected/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-kube-api-access-px8lr\") pod \"root-account-create-update-hkddh\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.253388 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-operator-scripts\") pod \"root-account-create-update-hkddh\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.282330 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px8lr\" (UniqueName: \"kubernetes.io/projected/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-kube-api-access-px8lr\") pod \"root-account-create-update-hkddh\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.397301 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.419296 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:09:08 crc kubenswrapper[4903]: E0128 17:09:08.419597 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:09:08 crc kubenswrapper[4903]: I0128 17:09:08.822851 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hkddh"] Jan 28 17:09:09 crc kubenswrapper[4903]: I0128 17:09:09.259748 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hkddh" event={"ID":"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b","Type":"ContainerStarted","Data":"3612c95b5cae04b8a083cca14ad662a3d5d412ba6e76178fb8ea385e9dfaab00"} Jan 28 17:09:09 crc kubenswrapper[4903]: I0128 17:09:09.261104 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hkddh" event={"ID":"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b","Type":"ContainerStarted","Data":"3d00470da43a82dbc49645fcdfa6f0512fb49fc28678d8f6e6615e70db66e7ae"} Jan 28 17:09:09 crc kubenswrapper[4903]: I0128 17:09:09.275829 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-hkddh" podStartSLOduration=1.275806716 podStartE2EDuration="1.275806716s" podCreationTimestamp="2026-01-28 17:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:09:09.272765193 +0000 UTC m=+5021.548736704" watchObservedRunningTime="2026-01-28 17:09:09.275806716 +0000 UTC m=+5021.551778227" Jan 28 17:09:10 crc kubenswrapper[4903]: I0128 17:09:10.269628 4903 generic.go:334] "Generic (PLEG): container finished" podID="fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b" containerID="3612c95b5cae04b8a083cca14ad662a3d5d412ba6e76178fb8ea385e9dfaab00" exitCode=0 Jan 28 17:09:10 crc kubenswrapper[4903]: I0128 17:09:10.269696 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hkddh" event={"ID":"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b","Type":"ContainerDied","Data":"3612c95b5cae04b8a083cca14ad662a3d5d412ba6e76178fb8ea385e9dfaab00"} Jan 28 17:09:11 crc kubenswrapper[4903]: I0128 17:09:11.566086 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:11 crc kubenswrapper[4903]: I0128 17:09:11.603617 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px8lr\" (UniqueName: \"kubernetes.io/projected/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-kube-api-access-px8lr\") pod \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " Jan 28 17:09:11 crc kubenswrapper[4903]: I0128 17:09:11.603674 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-operator-scripts\") pod \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\" (UID: \"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b\") " Jan 28 17:09:11 crc kubenswrapper[4903]: I0128 17:09:11.604425 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b" (UID: "fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:11 crc kubenswrapper[4903]: I0128 17:09:11.609582 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-kube-api-access-px8lr" (OuterVolumeSpecName: "kube-api-access-px8lr") pod "fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b" (UID: "fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b"). InnerVolumeSpecName "kube-api-access-px8lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:11 crc kubenswrapper[4903]: I0128 17:09:11.706105 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px8lr\" (UniqueName: \"kubernetes.io/projected/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-kube-api-access-px8lr\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:11 crc kubenswrapper[4903]: I0128 17:09:11.706136 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:12 crc kubenswrapper[4903]: I0128 17:09:12.285658 4903 generic.go:334] "Generic (PLEG): container finished" podID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerID="79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56" exitCode=0 Jan 28 17:09:12 crc kubenswrapper[4903]: I0128 17:09:12.285975 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c","Type":"ContainerDied","Data":"79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56"} Jan 28 17:09:12 crc kubenswrapper[4903]: I0128 17:09:12.289179 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hkddh" event={"ID":"fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b","Type":"ContainerDied","Data":"3d00470da43a82dbc49645fcdfa6f0512fb49fc28678d8f6e6615e70db66e7ae"} Jan 28 17:09:12 crc kubenswrapper[4903]: I0128 17:09:12.289220 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d00470da43a82dbc49645fcdfa6f0512fb49fc28678d8f6e6615e70db66e7ae" Jan 28 17:09:12 crc kubenswrapper[4903]: I0128 17:09:12.289295 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hkddh" Jan 28 17:09:13 crc kubenswrapper[4903]: I0128 17:09:13.298416 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c","Type":"ContainerStarted","Data":"9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287"} Jan 28 17:09:13 crc kubenswrapper[4903]: I0128 17:09:13.298981 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:13 crc kubenswrapper[4903]: I0128 17:09:13.301004 4903 generic.go:334] "Generic (PLEG): container finished" podID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerID="243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce" exitCode=0 Jan 28 17:09:13 crc kubenswrapper[4903]: I0128 17:09:13.301052 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9b69ca6-bdea-4c56-8c4a-66d030cf7917","Type":"ContainerDied","Data":"243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce"} Jan 28 17:09:13 crc kubenswrapper[4903]: I0128 17:09:13.330315 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.330292838 podStartE2EDuration="37.330292838s" podCreationTimestamp="2026-01-28 17:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:09:13.323786401 +0000 UTC m=+5025.599757912" watchObservedRunningTime="2026-01-28 17:09:13.330292838 +0000 UTC m=+5025.606264349" Jan 28 17:09:14 crc kubenswrapper[4903]: I0128 17:09:14.310008 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9b69ca6-bdea-4c56-8c4a-66d030cf7917","Type":"ContainerStarted","Data":"8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212"} Jan 28 17:09:14 crc kubenswrapper[4903]: I0128 17:09:14.310512 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 17:09:14 crc kubenswrapper[4903]: I0128 17:09:14.336244 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.336221239 podStartE2EDuration="37.336221239s" podCreationTimestamp="2026-01-28 17:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:09:14.333901275 +0000 UTC m=+5026.609872796" watchObservedRunningTime="2026-01-28 17:09:14.336221239 +0000 UTC m=+5026.612192750" Jan 28 17:09:14 crc kubenswrapper[4903]: I0128 17:09:14.720826 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hkddh"] Jan 28 17:09:14 crc kubenswrapper[4903]: I0128 17:09:14.728160 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hkddh"] Jan 28 17:09:16 crc kubenswrapper[4903]: I0128 17:09:16.423117 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b" path="/var/lib/kubelet/pods/fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b/volumes" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.736388 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k7dg8"] Jan 28 17:09:19 crc kubenswrapper[4903]: E0128 17:09:19.737021 4903 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b" containerName="mariadb-account-create-update" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.737041 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b" containerName="mariadb-account-create-update" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.737222 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee7d81a-5261-4ff1-8cc7-c3c65fe65d5b" containerName="mariadb-account-create-update" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.737819 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.743860 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.746000 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k7dg8"] Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.830226 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b4159c-7539-40b4-9e70-4b3bf1b079df-operator-scripts\") pod \"root-account-create-update-k7dg8\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.830365 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxlr\" (UniqueName: \"kubernetes.io/projected/97b4159c-7539-40b4-9e70-4b3bf1b079df-kube-api-access-bhxlr\") pod \"root-account-create-update-k7dg8\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.931602 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b4159c-7539-40b4-9e70-4b3bf1b079df-operator-scripts\") pod \"root-account-create-update-k7dg8\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.931699 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxlr\" (UniqueName: \"kubernetes.io/projected/97b4159c-7539-40b4-9e70-4b3bf1b079df-kube-api-access-bhxlr\") pod \"root-account-create-update-k7dg8\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.932553 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b4159c-7539-40b4-9e70-4b3bf1b079df-operator-scripts\") pod \"root-account-create-update-k7dg8\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:19 crc kubenswrapper[4903]: I0128 17:09:19.951239 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxlr\" (UniqueName: \"kubernetes.io/projected/97b4159c-7539-40b4-9e70-4b3bf1b079df-kube-api-access-bhxlr\") pod \"root-account-create-update-k7dg8\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " pod="openstack/root-account-create-update-k7dg8" Jan 
28 17:09:20 crc kubenswrapper[4903]: I0128 17:09:20.055785 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:20 crc kubenswrapper[4903]: I0128 17:09:20.498746 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k7dg8"] Jan 28 17:09:21 crc kubenswrapper[4903]: I0128 17:09:21.362811 4903 generic.go:334] "Generic (PLEG): container finished" podID="97b4159c-7539-40b4-9e70-4b3bf1b079df" containerID="89a3ea293d417a46807ee93d90b0cc278be8ba2eb0e87729c6e816b00ae566b5" exitCode=0 Jan 28 17:09:21 crc kubenswrapper[4903]: I0128 17:09:21.362889 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k7dg8" event={"ID":"97b4159c-7539-40b4-9e70-4b3bf1b079df","Type":"ContainerDied","Data":"89a3ea293d417a46807ee93d90b0cc278be8ba2eb0e87729c6e816b00ae566b5"} Jan 28 17:09:21 crc kubenswrapper[4903]: I0128 17:09:21.363163 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k7dg8" event={"ID":"97b4159c-7539-40b4-9e70-4b3bf1b079df","Type":"ContainerStarted","Data":"0531801832a2817be2d7fa13b22cd87f630187079088f8882d636c8048af5d70"} Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.415118 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:09:22 crc kubenswrapper[4903]: E0128 17:09:22.415471 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.693501 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.774225 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxlr\" (UniqueName: \"kubernetes.io/projected/97b4159c-7539-40b4-9e70-4b3bf1b079df-kube-api-access-bhxlr\") pod \"97b4159c-7539-40b4-9e70-4b3bf1b079df\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.774387 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b4159c-7539-40b4-9e70-4b3bf1b079df-operator-scripts\") pod \"97b4159c-7539-40b4-9e70-4b3bf1b079df\" (UID: \"97b4159c-7539-40b4-9e70-4b3bf1b079df\") " Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.775497 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b4159c-7539-40b4-9e70-4b3bf1b079df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97b4159c-7539-40b4-9e70-4b3bf1b079df" (UID: "97b4159c-7539-40b4-9e70-4b3bf1b079df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.781505 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b4159c-7539-40b4-9e70-4b3bf1b079df-kube-api-access-bhxlr" (OuterVolumeSpecName: "kube-api-access-bhxlr") pod "97b4159c-7539-40b4-9e70-4b3bf1b079df" (UID: "97b4159c-7539-40b4-9e70-4b3bf1b079df"). InnerVolumeSpecName "kube-api-access-bhxlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.876342 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97b4159c-7539-40b4-9e70-4b3bf1b079df-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:22 crc kubenswrapper[4903]: I0128 17:09:22.876382 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxlr\" (UniqueName: \"kubernetes.io/projected/97b4159c-7539-40b4-9e70-4b3bf1b079df-kube-api-access-bhxlr\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:23 crc kubenswrapper[4903]: I0128 17:09:23.376280 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k7dg8" event={"ID":"97b4159c-7539-40b4-9e70-4b3bf1b079df","Type":"ContainerDied","Data":"0531801832a2817be2d7fa13b22cd87f630187079088f8882d636c8048af5d70"} Jan 28 17:09:23 crc kubenswrapper[4903]: I0128 17:09:23.376320 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0531801832a2817be2d7fa13b22cd87f630187079088f8882d636c8048af5d70" Jan 28 17:09:23 crc kubenswrapper[4903]: I0128 17:09:23.376365 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k7dg8" Jan 28 17:09:28 crc kubenswrapper[4903]: I0128 17:09:28.291759 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:28 crc kubenswrapper[4903]: I0128 17:09:28.818763 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 17:09:33 crc kubenswrapper[4903]: I0128 17:09:33.413556 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:09:33 crc kubenswrapper[4903]: E0128 17:09:33.414750 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.347746 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-2zxdz"] Jan 28 17:09:37 crc kubenswrapper[4903]: E0128 17:09:37.348448 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b4159c-7539-40b4-9e70-4b3bf1b079df" containerName="mariadb-account-create-update" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.348464 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b4159c-7539-40b4-9e70-4b3bf1b079df" containerName="mariadb-account-create-update" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.348630 4903 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="97b4159c-7539-40b4-9e70-4b3bf1b079df" containerName="mariadb-account-create-update" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.349622 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.360776 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-2zxdz"] Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.490963 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-config\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.491061 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9cmg\" (UniqueName: \"kubernetes.io/projected/d0b61dea-09c9-4364-9eaf-bf0e94729d30-kube-api-access-l9cmg\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.491107 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-dns-svc\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.592764 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9cmg\" (UniqueName: \"kubernetes.io/projected/d0b61dea-09c9-4364-9eaf-bf0e94729d30-kube-api-access-l9cmg\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.592852 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-dns-svc\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.594425 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-dns-svc\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.594700 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-config\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.595812 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-config\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: 
I0128 17:09:37.612251 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9cmg\" (UniqueName: \"kubernetes.io/projected/d0b61dea-09c9-4364-9eaf-bf0e94729d30-kube-api-access-l9cmg\") pod \"dnsmasq-dns-699964fbc-2zxdz\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.668687 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:37 crc kubenswrapper[4903]: I0128 17:09:37.884589 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:09:38 crc kubenswrapper[4903]: I0128 17:09:38.113111 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-2zxdz"] Jan 28 17:09:38 crc kubenswrapper[4903]: I0128 17:09:38.487864 4903 generic.go:334] "Generic (PLEG): container finished" podID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerID="feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128" exitCode=0 Jan 28 17:09:38 crc kubenswrapper[4903]: I0128 17:09:38.488037 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" event={"ID":"d0b61dea-09c9-4364-9eaf-bf0e94729d30","Type":"ContainerDied","Data":"feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128"} Jan 28 17:09:38 crc kubenswrapper[4903]: I0128 17:09:38.488181 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" event={"ID":"d0b61dea-09c9-4364-9eaf-bf0e94729d30","Type":"ContainerStarted","Data":"5f7dd91edfa0fe38439234ccb4c165bfa9631315c62df2946ca2dd684fb1913b"} Jan 28 17:09:38 crc kubenswrapper[4903]: I0128 17:09:38.774493 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:09:39 crc kubenswrapper[4903]: I0128 17:09:39.497335 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" event={"ID":"d0b61dea-09c9-4364-9eaf-bf0e94729d30","Type":"ContainerStarted","Data":"68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc"} Jan 28 17:09:39 crc kubenswrapper[4903]: I0128 17:09:39.501743 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:39 crc kubenswrapper[4903]: I0128 17:09:39.525658 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" podStartSLOduration=2.525635456 podStartE2EDuration="2.525635456s" podCreationTimestamp="2026-01-28 17:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:09:39.522059918 +0000 UTC m=+5051.798031439" watchObservedRunningTime="2026-01-28 17:09:39.525635456 +0000 UTC m=+5051.801606967" Jan 28 17:09:41 crc kubenswrapper[4903]: I0128 17:09:41.761680 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerName="rabbitmq" containerID="cri-o://8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212" gracePeriod=604797 Jan 28 17:09:42 crc kubenswrapper[4903]: I0128 17:09:42.761987 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerName="rabbitmq" 
containerID="cri-o://9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287" gracePeriod=604797 Jan 28 17:09:44 crc kubenswrapper[4903]: I0128 17:09:44.414668 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:09:44 crc kubenswrapper[4903]: E0128 17:09:44.414979 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:09:47 crc kubenswrapper[4903]: I0128 17:09:47.670409 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:09:47 crc kubenswrapper[4903]: I0128 17:09:47.715079 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-w8sx7"] Jan 28 17:09:47 crc kubenswrapper[4903]: I0128 17:09:47.715339 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" podUID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerName="dnsmasq-dns" containerID="cri-o://8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda" gracePeriod=10 Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.139651 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.264977 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-dns-svc\") pod \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.265089 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-config\") pod \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.265190 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dch7q\" (UniqueName: \"kubernetes.io/projected/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-kube-api-access-dch7q\") pod \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\" (UID: \"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.283859 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-kube-api-access-dch7q" (OuterVolumeSpecName: "kube-api-access-dch7q") pod "0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" (UID: "0abc9996-dcbd-4d8c-9b25-079a16b6b5e0"). InnerVolumeSpecName "kube-api-access-dch7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.289542 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.238:5671: connect: connection refused" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.299841 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-config" (OuterVolumeSpecName: "config") pod "0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" (UID: "0abc9996-dcbd-4d8c-9b25-079a16b6b5e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.302021 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" (UID: "0abc9996-dcbd-4d8c-9b25-079a16b6b5e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.346995 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.367123 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.367188 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.367200 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dch7q\" (UniqueName: \"kubernetes.io/projected/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0-kube-api-access-dch7q\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.468579 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-plugins-conf\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.468659 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-tls\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.468680 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-config-data\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.468821 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxpg\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-kube-api-access-4zxpg\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: 
\"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469299 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-erlang-cookie-secret\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469336 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-erlang-cookie\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469366 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-server-conf\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469384 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-plugins\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469438 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-pod-info\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469552 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469575 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.469637 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-confd\") pod \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\" (UID: \"c9b69ca6-bdea-4c56-8c4a-66d030cf7917\") " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.470013 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.470058 4903 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.470633 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.471880 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-kube-api-access-4zxpg" (OuterVolumeSpecName: "kube-api-access-4zxpg") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "kube-api-access-4zxpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.472540 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.473171 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-pod-info" (OuterVolumeSpecName: "pod-info") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.475938 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.480793 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974" (OuterVolumeSpecName: "persistence") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.488655 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-config-data" (OuterVolumeSpecName: "config-data") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.506455 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-server-conf" (OuterVolumeSpecName: "server-conf") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.543755 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c9b69ca6-bdea-4c56-8c4a-66d030cf7917" (UID: "c9b69ca6-bdea-4c56-8c4a-66d030cf7917"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.563796 4903 generic.go:334] "Generic (PLEG): container finished" podID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerID="8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212" exitCode=0 Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.563846 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.563872 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9b69ca6-bdea-4c56-8c4a-66d030cf7917","Type":"ContainerDied","Data":"8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212"} Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.563902 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9b69ca6-bdea-4c56-8c4a-66d030cf7917","Type":"ContainerDied","Data":"884ede21c13d281223ed5e88a7cefb5208733775117d755e9cb2f3d52e56df16"} Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.563921 4903 scope.go:117] "RemoveContainer" containerID="8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.566976 4903 generic.go:334] "Generic (PLEG): container finished" podID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerID="8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda" exitCode=0 Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.567045 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.567038 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" event={"ID":"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0","Type":"ContainerDied","Data":"8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda"} Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.567168 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-w8sx7" event={"ID":"0abc9996-dcbd-4d8c-9b25-079a16b6b5e0","Type":"ContainerDied","Data":"4e9646a7d9995e558ab0b3f6835e5295b20b777d4b939caf6e0b0c5114aa80b4"} Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.570953 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.570976 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.570987 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.570997 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zxpg\" (UniqueName: \"kubernetes.io/projected/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-kube-api-access-4zxpg\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.571007 4903 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.571016 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.571025 4903 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.571033 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.571040 4903 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9b69ca6-bdea-4c56-8c4a-66d030cf7917-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.571609 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") on node \"crc\" " Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.585149 4903 scope.go:117] "RemoveContainer" 
containerID="243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.607335 4903 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.607599 4903 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974") on node "crc" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.616396 4903 scope.go:117] "RemoveContainer" containerID="8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212" Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.616880 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212\": container with ID starting with 8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212 not found: ID does not exist" containerID="8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.616935 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212"} err="failed to get container status \"8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212\": rpc error: code = NotFound desc = could not find container \"8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212\": container with ID starting with 8729f20d8274cb286e393f1e86e559c9d9873800c37540334b0df809a5cb5212 not found: ID does not exist" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.616962 4903 scope.go:117] "RemoveContainer" containerID="243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.619141 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-w8sx7"] Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.620436 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce\": container with ID starting with 243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce not found: ID does not exist" containerID="243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.620507 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce"} err="failed to get container status \"243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce\": rpc error: code = NotFound desc = could not find container \"243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce\": container with ID starting with 243fd9ff98bff897f3cba3eb2eb2b9884b0de1b30fcf4cab16765cfe440b57ce not found: ID does not exist" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.620573 4903 scope.go:117] "RemoveContainer" containerID="8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.632468 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-w8sx7"] 
Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.651948 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.663473 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.663578 4903 scope.go:117] "RemoveContainer" containerID="0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.670139 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.670770 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerName="setup-container" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.670935 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerName="setup-container" Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.670959 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerName="dnsmasq-dns" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.670966 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerName="dnsmasq-dns" Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.670982 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerName="init" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.670989 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerName="init" Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.671007 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerName="rabbitmq" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.671014 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerName="rabbitmq" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.671172 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" containerName="dnsmasq-dns" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.671196 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" containerName="rabbitmq" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.672119 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.673672 4903 reconciler_common.go:293] "Volume detached for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.674807 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.675494 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.675692 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.675787 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.676079 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.676233 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tshzg" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.684161 4903 scope.go:117] "RemoveContainer" containerID="8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.684219 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.685817 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda\": container with ID starting with 8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda not found: ID does not exist" containerID="8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.685889 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda"} err="failed to get container status \"8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda\": rpc error: code = NotFound desc = could not find container \"8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda\": container with ID starting with 8144ea0375d99e68ea6e9b6534883e1e21ea23189e2a05540aeef86af57cdcda not found: ID does not exist" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.685915 4903 scope.go:117] "RemoveContainer" containerID="0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.686006 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:09:48 crc kubenswrapper[4903]: E0128 17:09:48.686593 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538\": container with ID starting with 0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538 not found: ID does not exist" 
containerID="0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.686672 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538"} err="failed to get container status \"0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538\": rpc error: code = NotFound desc = could not find container \"0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538\": container with ID starting with 0a68410782483bbcd41f5ab1d045f8e1ec5624d88f01ac9d3aad844d277b2538 not found: ID does not exist" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775040 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775213 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775343 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775403 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775454 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775493 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775513 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775556 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhz28\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-kube-api-access-xhz28\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775675 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-config-data\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775767 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e11136e-abe3-4027-8fdd-c992cf92b52e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.775783 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e11136e-abe3-4027-8fdd-c992cf92b52e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.877588 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e11136e-abe3-4027-8fdd-c992cf92b52e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.877646 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e11136e-abe3-4027-8fdd-c992cf92b52e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.877688 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.877714 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.877757 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.877781 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.878234 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.878286 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.878312 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.878339 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhz28\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-kube-api-access-xhz28\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.878476 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-config-data\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.984905 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.984986 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fc2803564cd4572c17781a518f92c5cad76f1e3586297d676207076497b1b22b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 28 17:09:48 crc kubenswrapper[4903]: I0128 17:09:48.986087 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.014330 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.014749 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.014951 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.015145 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.015424 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6e11136e-abe3-4027-8fdd-c992cf92b52e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.016085 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6e11136e-abe3-4027-8fdd-c992cf92b52e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.019733 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhz28\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-kube-api-access-xhz28\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.021095 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e11136e-abe3-4027-8fdd-c992cf92b52e-config-data\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.021165 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6e11136e-abe3-4027-8fdd-c992cf92b52e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.218403 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-440bdbde-ddf5-45e8-b16c-fe61c112c974\") pod \"rabbitmq-server-0\" (UID: \"6e11136e-abe3-4027-8fdd-c992cf92b52e\") " pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.291373 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.495838 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.584115 4903 generic.go:334] "Generic (PLEG): container finished" podID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerID="9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287" exitCode=0 Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.584187 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c","Type":"ContainerDied","Data":"9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287"} Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.584211 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c","Type":"ContainerDied","Data":"f03e6d9105b2850cfad30ecceb60385936527e89aa6e4cc9f24bf9f0058d84a2"} Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.584223 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.584232 4903 scope.go:117] "RemoveContainer" containerID="9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.610163 4903 scope.go:117] "RemoveContainer" containerID="79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.630235 4903 scope.go:117] "RemoveContainer" containerID="9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287" Jan 28 17:09:49 crc kubenswrapper[4903]: E0128 17:09:49.630785 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287\": container with ID starting with 9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287 not found: ID does not exist" containerID="9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.630838 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287"} err="failed to get container status \"9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287\": rpc error: code = NotFound desc = could not find container \"9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287\": container with ID starting with 9ab71c09c0d928cfbbc6c91f4187a9dccf08385727c1e6034de3c548eaa0a287 not found: ID does not exist" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.630871 4903 scope.go:117] "RemoveContainer" containerID="79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56" Jan 28 17:09:49 crc kubenswrapper[4903]: E0128 17:09:49.631267 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56\": container with ID starting with 79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56 not found: ID does not exist" containerID="79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.631306 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56"} err="failed to get container status \"79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56\": rpc error: code = NotFound desc = could not find container \"79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56\": container with ID starting with 79e9e6796d7a1d9729580e78f63d3ce0f58b9ad5710a4749a74121215340ff56 not found: ID does not exist" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692618 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-tls\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692695 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-plugins\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" 
(UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692765 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-confd\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692801 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptc64\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-kube-api-access-ptc64\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692872 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-erlang-cookie-secret\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692899 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-plugins-conf\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692929 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-pod-info\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.692955 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-erlang-cookie\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.693081 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.693138 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-config-data\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.693182 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-server-conf\") pod \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\" (UID: \"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c\") " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.694246 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod 
"1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.694334 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.695595 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.698277 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-pod-info" (OuterVolumeSpecName: "pod-info") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.698339 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.698373 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-kube-api-access-ptc64" (OuterVolumeSpecName: "kube-api-access-ptc64") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "kube-api-access-ptc64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.698483 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.704202 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3" (OuterVolumeSpecName: "persistence") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "pvc-6024bef6-4b48-4259-9740-2da38fb716b3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.712736 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-config-data" (OuterVolumeSpecName: "config-data") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.729301 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-server-conf" (OuterVolumeSpecName: "server-conf") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.758839 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" (UID: "1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.794959 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795193 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795281 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptc64\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-kube-api-access-ptc64\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795364 4903 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795456 4903 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795557 4903 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795649 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795766 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") on node \"crc\" " Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795855 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.795986 4903 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.796074 4903 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.796661 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.810981 4903 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.811219 4903 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6024bef6-4b48-4259-9740-2da38fb716b3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3") on node "crc" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.897225 4903 reconciler_common.go:293] "Volume detached for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") on node \"crc\" DevicePath \"\"" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.945307 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.950187 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.971679 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:09:49 crc kubenswrapper[4903]: E0128 17:09:49.972057 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerName="setup-container" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.972078 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerName="setup-container" Jan 28 17:09:49 crc kubenswrapper[4903]: E0128 17:09:49.972115 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerName="rabbitmq" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.972124 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerName="rabbitmq" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.972305 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" containerName="rabbitmq" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.973226 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.976424 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.977736 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.977950 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.978074 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.978251 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.978407 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-f7gmf" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.978557 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 17:09:49 crc kubenswrapper[4903]: I0128 17:09:49.983079 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.100737 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjz68\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-kube-api-access-rjz68\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101150 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101401 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101476 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101523 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101561 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101673 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a21b2be1-6894-442f-8f4a-6a396becbfa9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101718 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101759 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101824 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.101855 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a21b2be1-6894-442f-8f4a-6a396becbfa9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203618 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203689 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a21b2be1-6894-442f-8f4a-6a396becbfa9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203725 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjz68\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-kube-api-access-rjz68\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203751 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203781 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203800 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203819 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203837 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203878 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a21b2be1-6894-442f-8f4a-6a396becbfa9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203916 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.203959 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.205100 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.205335 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.205481 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.205509 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.205821 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a21b2be1-6894-442f-8f4a-6a396becbfa9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.206891 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.206928 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1401f4aa03b3a6aa45d828a1f682f335d8793ad57f5468fd46ec0a0c7cab6871/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.208152 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a21b2be1-6894-442f-8f4a-6a396becbfa9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.208383 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.209721 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.218241 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a21b2be1-6894-442f-8f4a-6a396becbfa9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.229145 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjz68\" (UniqueName: \"kubernetes.io/projected/a21b2be1-6894-442f-8f4a-6a396becbfa9-kube-api-access-rjz68\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.242737 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6024bef6-4b48-4259-9740-2da38fb716b3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6024bef6-4b48-4259-9740-2da38fb716b3\") pod \"rabbitmq-cell1-server-0\" (UID: \"a21b2be1-6894-442f-8f4a-6a396becbfa9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.294642 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.427356 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abc9996-dcbd-4d8c-9b25-079a16b6b5e0" path="/var/lib/kubelet/pods/0abc9996-dcbd-4d8c-9b25-079a16b6b5e0/volumes" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.428498 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c" path="/var/lib/kubelet/pods/1a04f428-2a31-4bc7-a1bc-a0830d6a3e8c/volumes" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.429851 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b69ca6-bdea-4c56-8c4a-66d030cf7917" path="/var/lib/kubelet/pods/c9b69ca6-bdea-4c56-8c4a-66d030cf7917/volumes" Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.594639 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6e11136e-abe3-4027-8fdd-c992cf92b52e","Type":"ContainerStarted","Data":"628fc5797710bf71a976bb3356a67fe9542e0daed045a4d6ea317d6d32f13dda"} Jan 28 17:09:50 crc kubenswrapper[4903]: I0128 17:09:50.725229 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 17:09:50 crc kubenswrapper[4903]: W0128 17:09:50.783858 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda21b2be1_6894_442f_8f4a_6a396becbfa9.slice/crio-060aee945b4fabeb54902690b3148fc7f3062f4f5cfac6d2d35d2a4a3f90cd50 WatchSource:0}: Error finding container 060aee945b4fabeb54902690b3148fc7f3062f4f5cfac6d2d35d2a4a3f90cd50: Status 404 returned error can't find the container with id 060aee945b4fabeb54902690b3148fc7f3062f4f5cfac6d2d35d2a4a3f90cd50 Jan 28 17:09:51 crc kubenswrapper[4903]: I0128 17:09:51.602990 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a21b2be1-6894-442f-8f4a-6a396becbfa9","Type":"ContainerStarted","Data":"060aee945b4fabeb54902690b3148fc7f3062f4f5cfac6d2d35d2a4a3f90cd50"} Jan 28 17:09:51 crc kubenswrapper[4903]: I0128 17:09:51.605015 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6e11136e-abe3-4027-8fdd-c992cf92b52e","Type":"ContainerStarted","Data":"450e6a5bb0526cc9ea117b7f2eff3ae45f81f96e51ebd80e162ac6ec6f0bb302"} Jan 28 17:09:52 crc kubenswrapper[4903]: I0128 17:09:52.612704 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a21b2be1-6894-442f-8f4a-6a396becbfa9","Type":"ContainerStarted","Data":"d64c65889fadfbf2b456aef6b92a0224bea4a0cbdc2b11d3ed0edab45b033a92"} Jan 28 17:09:56 crc kubenswrapper[4903]: I0128 17:09:56.413081 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:09:56 crc kubenswrapper[4903]: E0128 17:09:56.413621 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:10:11 crc kubenswrapper[4903]: I0128 17:10:11.413235 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:10:11 crc kubenswrapper[4903]: E0128 17:10:11.414780 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:10:23 crc kubenswrapper[4903]: I0128 17:10:23.413696 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:10:23 crc kubenswrapper[4903]: E0128 17:10:23.414526 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:10:23 crc kubenswrapper[4903]: I0128 17:10:23.856060 4903 generic.go:334] "Generic (PLEG): container finished" podID="a21b2be1-6894-442f-8f4a-6a396becbfa9" containerID="d64c65889fadfbf2b456aef6b92a0224bea4a0cbdc2b11d3ed0edab45b033a92" exitCode=0 Jan 28 17:10:23 crc kubenswrapper[4903]: I0128 17:10:23.856145 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a21b2be1-6894-442f-8f4a-6a396becbfa9","Type":"ContainerDied","Data":"d64c65889fadfbf2b456aef6b92a0224bea4a0cbdc2b11d3ed0edab45b033a92"} Jan 28 17:10:23 crc kubenswrapper[4903]: I0128 17:10:23.859357 4903 generic.go:334] "Generic (PLEG): container finished" podID="6e11136e-abe3-4027-8fdd-c992cf92b52e" containerID="450e6a5bb0526cc9ea117b7f2eff3ae45f81f96e51ebd80e162ac6ec6f0bb302" exitCode=0 Jan 28 17:10:23 crc kubenswrapper[4903]: I0128 17:10:23.859400 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6e11136e-abe3-4027-8fdd-c992cf92b52e","Type":"ContainerDied","Data":"450e6a5bb0526cc9ea117b7f2eff3ae45f81f96e51ebd80e162ac6ec6f0bb302"} Jan 28 17:10:24 crc kubenswrapper[4903]: I0128 17:10:24.869804 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"6e11136e-abe3-4027-8fdd-c992cf92b52e","Type":"ContainerStarted","Data":"627700ba10fc8247c2ad2cae40f0d00023d91c90e2c69d51a09b3d14874ecb58"} Jan 28 17:10:24 crc kubenswrapper[4903]: I0128 17:10:24.870458 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 17:10:24 crc kubenswrapper[4903]: I0128 17:10:24.875230 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a21b2be1-6894-442f-8f4a-6a396becbfa9","Type":"ContainerStarted","Data":"324102bb3ba318b220e2ad62b0e5280e91f2454bbce4faa68fab4e70fc5631ce"} Jan 28 17:10:24 crc kubenswrapper[4903]: I0128 17:10:24.876003 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:10:24 crc kubenswrapper[4903]: I0128 17:10:24.901345 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.901328065 podStartE2EDuration="36.901328065s" podCreationTimestamp="2026-01-28 17:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:10:24.898720485 +0000 UTC m=+5097.174692006" watchObservedRunningTime="2026-01-28 17:10:24.901328065 +0000 UTC m=+5097.177299576" Jan 28 17:10:24 crc kubenswrapper[4903]: I0128 17:10:24.924488 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.924469876 podStartE2EDuration="35.924469876s" podCreationTimestamp="2026-01-28 17:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:10:24.921470684 +0000 UTC m=+5097.197442205" watchObservedRunningTime="2026-01-28 17:10:24.924469876 +0000 UTC m=+5097.200441377" Jan 28 17:10:37 crc kubenswrapper[4903]: I0128 17:10:37.414716 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:10:37 crc kubenswrapper[4903]: E0128 17:10:37.415408 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:10:39 crc kubenswrapper[4903]: I0128 17:10:39.295879 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 17:10:40 crc kubenswrapper[4903]: I0128 17:10:40.297767 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.014844 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.016682 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.021043 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hvcfc" Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.027794 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.163141 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvx47\" (UniqueName: \"kubernetes.io/projected/9e039526-9d98-4d02-9598-7994a66ca810-kube-api-access-tvx47\") pod \"mariadb-client\" (UID: \"9e039526-9d98-4d02-9598-7994a66ca810\") " pod="openstack/mariadb-client" Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.264414 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvx47\" (UniqueName: \"kubernetes.io/projected/9e039526-9d98-4d02-9598-7994a66ca810-kube-api-access-tvx47\") pod \"mariadb-client\" (UID: \"9e039526-9d98-4d02-9598-7994a66ca810\") " pod="openstack/mariadb-client" Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.294247 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvx47\" (UniqueName: \"kubernetes.io/projected/9e039526-9d98-4d02-9598-7994a66ca810-kube-api-access-tvx47\") pod \"mariadb-client\" (UID: \"9e039526-9d98-4d02-9598-7994a66ca810\") " pod="openstack/mariadb-client" Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.338255 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.824929 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:10:44 crc kubenswrapper[4903]: I0128 17:10:44.829547 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:10:45 crc kubenswrapper[4903]: I0128 17:10:45.027370 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9e039526-9d98-4d02-9598-7994a66ca810","Type":"ContainerStarted","Data":"b9fbeca01860930df8c8f53bef7f44afa69ec541536ab6586fadbd8f4164c928"} Jan 28 17:10:47 crc kubenswrapper[4903]: I0128 17:10:47.040630 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9e039526-9d98-4d02-9598-7994a66ca810","Type":"ContainerStarted","Data":"8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b"} Jan 28 17:10:47 crc kubenswrapper[4903]: I0128 17:10:47.056132 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=3.106126115 podStartE2EDuration="4.056111713s" podCreationTimestamp="2026-01-28 17:10:43 +0000 UTC" firstStartedPulling="2026-01-28 17:10:44.829318013 +0000 UTC m=+5117.105289514" lastFinishedPulling="2026-01-28 17:10:45.779303591 +0000 UTC m=+5118.055275112" observedRunningTime="2026-01-28 17:10:47.052679749 +0000 UTC m=+5119.328651270" watchObservedRunningTime="2026-01-28 17:10:47.056111713 +0000 UTC m=+5119.332083224" Jan 28 17:10:48 crc kubenswrapper[4903]: I0128 17:10:48.417924 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:10:48 crc kubenswrapper[4903]: E0128 17:10:48.418183 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:10:59 crc kubenswrapper[4903]: I0128 17:10:59.129787 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:10:59 crc kubenswrapper[4903]: I0128 17:10:59.130496 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="9e039526-9d98-4d02-9598-7994a66ca810" containerName="mariadb-client" containerID="cri-o://8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b" gracePeriod=30 Jan 28 17:10:59 crc kubenswrapper[4903]: I0128 17:10:59.587328 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:10:59 crc kubenswrapper[4903]: I0128 17:10:59.720576 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvx47\" (UniqueName: \"kubernetes.io/projected/9e039526-9d98-4d02-9598-7994a66ca810-kube-api-access-tvx47\") pod \"9e039526-9d98-4d02-9598-7994a66ca810\" (UID: \"9e039526-9d98-4d02-9598-7994a66ca810\") " Jan 28 17:10:59 crc kubenswrapper[4903]: I0128 17:10:59.725610 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e039526-9d98-4d02-9598-7994a66ca810-kube-api-access-tvx47" (OuterVolumeSpecName: "kube-api-access-tvx47") pod "9e039526-9d98-4d02-9598-7994a66ca810" (UID: "9e039526-9d98-4d02-9598-7994a66ca810"). InnerVolumeSpecName "kube-api-access-tvx47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:10:59 crc kubenswrapper[4903]: I0128 17:10:59.822741 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvx47\" (UniqueName: \"kubernetes.io/projected/9e039526-9d98-4d02-9598-7994a66ca810-kube-api-access-tvx47\") on node \"crc\" DevicePath \"\"" Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.147251 4903 generic.go:334] "Generic (PLEG): container finished" podID="9e039526-9d98-4d02-9598-7994a66ca810" containerID="8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b" exitCode=143 Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.147317 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9e039526-9d98-4d02-9598-7994a66ca810","Type":"ContainerDied","Data":"8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b"} Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.147349 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.147385 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"9e039526-9d98-4d02-9598-7994a66ca810","Type":"ContainerDied","Data":"b9fbeca01860930df8c8f53bef7f44afa69ec541536ab6586fadbd8f4164c928"} Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.147436 4903 scope.go:117] "RemoveContainer" containerID="8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b" Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.165992 4903 scope.go:117] "RemoveContainer" containerID="8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b" Jan 28 17:11:00 crc kubenswrapper[4903]: E0128 17:11:00.166459 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b\": container with ID starting with 8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b not found: ID does not exist" containerID="8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b" Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.166519 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b"} err="failed to get container status \"8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b\": rpc error: code = NotFound desc = could not find container \"8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b\": container with ID starting with 8d49e9de63d24daf45b61a818e6b1e8178714ac2eec0bacfeec762f1726d7e4b not found: ID does not exist" Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.196480 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.203370 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:11:00 crc kubenswrapper[4903]: I0128 17:11:00.421827 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e039526-9d98-4d02-9598-7994a66ca810" path="/var/lib/kubelet/pods/9e039526-9d98-4d02-9598-7994a66ca810/volumes" Jan 28 17:11:02 crc kubenswrapper[4903]: I0128 17:11:02.413320 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:11:02 crc kubenswrapper[4903]: E0128 17:11:02.413955 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:11:13 crc kubenswrapper[4903]: I0128 17:11:13.413828 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:11:13 crc kubenswrapper[4903]: E0128 17:11:13.414510 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:11:27 crc kubenswrapper[4903]: I0128 17:11:27.413877 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:11:27 crc kubenswrapper[4903]: E0128 17:11:27.414685 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:11:42 crc kubenswrapper[4903]: I0128 17:11:42.413573 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:11:42 crc kubenswrapper[4903]: E0128 17:11:42.414596 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:11:54 crc kubenswrapper[4903]: I0128 17:11:54.413489 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:11:54 crc kubenswrapper[4903]: E0128 17:11:54.414429 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:12:09 crc kubenswrapper[4903]: I0128 17:12:09.413860 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:12:09 crc kubenswrapper[4903]: E0128 17:12:09.414685 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:12:22 crc kubenswrapper[4903]: I0128 17:12:22.414272 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:12:22 crc kubenswrapper[4903]: E0128 17:12:22.415159 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:12:37 crc kubenswrapper[4903]: I0128 17:12:37.414033 4903 
scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:12:37 crc kubenswrapper[4903]: E0128 17:12:37.414771 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:12:51 crc kubenswrapper[4903]: I0128 17:12:51.413739 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:12:51 crc kubenswrapper[4903]: E0128 17:12:51.414525 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:12:53 crc kubenswrapper[4903]: I0128 17:12:53.447439 4903 scope.go:117] "RemoveContainer" containerID="6552795f881fdafbf0d4f31aeaedf1f97397c010f172d6d47b50feebdc1acba4" Jan 28 17:13:04 crc kubenswrapper[4903]: I0128 17:13:04.414085 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:13:04 crc kubenswrapper[4903]: E0128 17:13:04.414886 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.821609 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-njlw4"] Jan 28 17:13:05 crc kubenswrapper[4903]: E0128 17:13:05.822390 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e039526-9d98-4d02-9598-7994a66ca810" containerName="mariadb-client" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.822407 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e039526-9d98-4d02-9598-7994a66ca810" containerName="mariadb-client" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.822606 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e039526-9d98-4d02-9598-7994a66ca810" containerName="mariadb-client" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.823831 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.836895 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njlw4"] Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.887544 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-utilities\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.887602 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rl5\" (UniqueName: \"kubernetes.io/projected/ffa1ff4f-9a02-4615-b292-edb8ccde156b-kube-api-access-25rl5\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.887634 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-catalog-content\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.988896 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-utilities\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.989169 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rl5\" (UniqueName: \"kubernetes.io/projected/ffa1ff4f-9a02-4615-b292-edb8ccde156b-kube-api-access-25rl5\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.989292 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-catalog-content\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.989484 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-utilities\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:05 crc kubenswrapper[4903]: I0128 17:13:05.989610 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-catalog-content\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:06 crc kubenswrapper[4903]: I0128 17:13:06.012415 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-25rl5\" (UniqueName: \"kubernetes.io/projected/ffa1ff4f-9a02-4615-b292-edb8ccde156b-kube-api-access-25rl5\") pod \"redhat-operators-njlw4\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:06 crc kubenswrapper[4903]: I0128 17:13:06.185769 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:06 crc kubenswrapper[4903]: I0128 17:13:06.618815 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njlw4"] Jan 28 17:13:07 crc kubenswrapper[4903]: I0128 17:13:07.020818 4903 generic.go:334] "Generic (PLEG): container finished" podID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerID="e00f0c76937f59784a336da518b9fa3cce19c0fde87007dc4d1315aaa0cdce11" exitCode=0 Jan 28 17:13:07 crc kubenswrapper[4903]: I0128 17:13:07.020974 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njlw4" event={"ID":"ffa1ff4f-9a02-4615-b292-edb8ccde156b","Type":"ContainerDied","Data":"e00f0c76937f59784a336da518b9fa3cce19c0fde87007dc4d1315aaa0cdce11"} Jan 28 17:13:07 crc kubenswrapper[4903]: I0128 17:13:07.021134 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njlw4" event={"ID":"ffa1ff4f-9a02-4615-b292-edb8ccde156b","Type":"ContainerStarted","Data":"832837bc7e32209d68c6f55b3c27cec9facf3153d9aea8c0fcbbbaf52111b324"} Jan 28 17:13:09 crc kubenswrapper[4903]: I0128 17:13:09.034756 4903 generic.go:334] "Generic (PLEG): container finished" podID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerID="c42f9a09ed259526537aeaf4f943a46f69875934135d0215847201c69b619706" exitCode=0 Jan 28 17:13:09 crc kubenswrapper[4903]: I0128 17:13:09.034833 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njlw4" event={"ID":"ffa1ff4f-9a02-4615-b292-edb8ccde156b","Type":"ContainerDied","Data":"c42f9a09ed259526537aeaf4f943a46f69875934135d0215847201c69b619706"} Jan 28 17:13:10 crc kubenswrapper[4903]: I0128 17:13:10.044350 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njlw4" event={"ID":"ffa1ff4f-9a02-4615-b292-edb8ccde156b","Type":"ContainerStarted","Data":"be388d19f9bd0b4ad0dfbbdd977dd7f49c3b9e97abbb2396d36775cac24bb6be"} Jan 28 17:13:10 crc kubenswrapper[4903]: I0128 17:13:10.067700 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-njlw4" podStartSLOduration=2.541270474 podStartE2EDuration="5.067681695s" podCreationTimestamp="2026-01-28 17:13:05 +0000 UTC" firstStartedPulling="2026-01-28 17:13:07.023773727 +0000 UTC m=+5259.299745238" lastFinishedPulling="2026-01-28 17:13:09.550184938 +0000 UTC m=+5261.826156459" observedRunningTime="2026-01-28 17:13:10.063361208 +0000 UTC m=+5262.339332719" watchObservedRunningTime="2026-01-28 17:13:10.067681695 +0000 UTC m=+5262.343653206" Jan 28 17:13:16 crc kubenswrapper[4903]: I0128 17:13:16.186848 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:16 crc kubenswrapper[4903]: I0128 17:13:16.187444 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:16 crc kubenswrapper[4903]: I0128 17:13:16.260963 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:17 crc kubenswrapper[4903]: I0128 17:13:17.155680 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:17 crc kubenswrapper[4903]: I0128 17:13:17.213858 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njlw4"] Jan 28 17:13:18 crc kubenswrapper[4903]: I0128 17:13:18.418130 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:13:18 crc kubenswrapper[4903]: E0128 17:13:18.418679 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:13:19 crc kubenswrapper[4903]: I0128 17:13:19.114809 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-njlw4" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="registry-server" containerID="cri-o://be388d19f9bd0b4ad0dfbbdd977dd7f49c3b9e97abbb2396d36775cac24bb6be" gracePeriod=2 Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.134235 4903 generic.go:334] "Generic (PLEG): container finished" podID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerID="be388d19f9bd0b4ad0dfbbdd977dd7f49c3b9e97abbb2396d36775cac24bb6be" exitCode=0 Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.134339 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njlw4" event={"ID":"ffa1ff4f-9a02-4615-b292-edb8ccde156b","Type":"ContainerDied","Data":"be388d19f9bd0b4ad0dfbbdd977dd7f49c3b9e97abbb2396d36775cac24bb6be"} Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.590137 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.726334 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-catalog-content\") pod \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.726684 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25rl5\" (UniqueName: \"kubernetes.io/projected/ffa1ff4f-9a02-4615-b292-edb8ccde156b-kube-api-access-25rl5\") pod \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.726840 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-utilities\") pod \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\" (UID: \"ffa1ff4f-9a02-4615-b292-edb8ccde156b\") " Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.730685 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-utilities" (OuterVolumeSpecName: "utilities") pod "ffa1ff4f-9a02-4615-b292-edb8ccde156b" (UID: "ffa1ff4f-9a02-4615-b292-edb8ccde156b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.735994 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa1ff4f-9a02-4615-b292-edb8ccde156b-kube-api-access-25rl5" (OuterVolumeSpecName: "kube-api-access-25rl5") pod "ffa1ff4f-9a02-4615-b292-edb8ccde156b" (UID: "ffa1ff4f-9a02-4615-b292-edb8ccde156b"). InnerVolumeSpecName "kube-api-access-25rl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.828700 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25rl5\" (UniqueName: \"kubernetes.io/projected/ffa1ff4f-9a02-4615-b292-edb8ccde156b-kube-api-access-25rl5\") on node \"crc\" DevicePath \"\"" Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.828733 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.890563 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffa1ff4f-9a02-4615-b292-edb8ccde156b" (UID: "ffa1ff4f-9a02-4615-b292-edb8ccde156b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:13:21 crc kubenswrapper[4903]: I0128 17:13:21.930466 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffa1ff4f-9a02-4615-b292-edb8ccde156b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.146896 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njlw4" event={"ID":"ffa1ff4f-9a02-4615-b292-edb8ccde156b","Type":"ContainerDied","Data":"832837bc7e32209d68c6f55b3c27cec9facf3153d9aea8c0fcbbbaf52111b324"} Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.147027 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njlw4" Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.148304 4903 scope.go:117] "RemoveContainer" containerID="be388d19f9bd0b4ad0dfbbdd977dd7f49c3b9e97abbb2396d36775cac24bb6be" Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.165617 4903 scope.go:117] "RemoveContainer" containerID="c42f9a09ed259526537aeaf4f943a46f69875934135d0215847201c69b619706" Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.192808 4903 scope.go:117] "RemoveContainer" containerID="e00f0c76937f59784a336da518b9fa3cce19c0fde87007dc4d1315aaa0cdce11" Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.244134 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njlw4"] Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.253218 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-njlw4"] Jan 28 17:13:22 crc kubenswrapper[4903]: I0128 17:13:22.423956 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" path="/var/lib/kubelet/pods/ffa1ff4f-9a02-4615-b292-edb8ccde156b/volumes" Jan 28 17:13:30 crc kubenswrapper[4903]: I0128 17:13:30.414372 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:13:30 crc kubenswrapper[4903]: E0128 17:13:30.414895 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:13:44 crc kubenswrapper[4903]: I0128 17:13:44.415834 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:13:44 crc kubenswrapper[4903]: E0128 17:13:44.417055 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:13:55 crc kubenswrapper[4903]: I0128 17:13:55.414141 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:13:55 crc kubenswrapper[4903]: E0128 17:13:55.415027 
4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:14:06 crc kubenswrapper[4903]: I0128 17:14:06.413506 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:14:07 crc kubenswrapper[4903]: I0128 17:14:07.519099 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"7378b6481e12992f0a6ba3f03ca88e1ce24c2396b78c53e3ea7dd86651deb56a"} Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.770877 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 28 17:14:35 crc kubenswrapper[4903]: E0128 17:14:35.771893 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="registry-server" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.771911 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="registry-server" Jan 28 17:14:35 crc kubenswrapper[4903]: E0128 17:14:35.771929 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="extract-utilities" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.771939 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="extract-utilities" Jan 28 17:14:35 crc kubenswrapper[4903]: E0128 17:14:35.771956 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="extract-content" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.771965 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="extract-content" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.772164 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa1ff4f-9a02-4615-b292-edb8ccde156b" containerName="registry-server" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.772797 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.776766 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.776909 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hvcfc" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.927676 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7wd\" (UniqueName: \"kubernetes.io/projected/298e63ce-8f8c-4ff3-831a-89771211fb4a-kube-api-access-tm7wd\") pod \"mariadb-copy-data\" (UID: \"298e63ce-8f8c-4ff3-831a-89771211fb4a\") " pod="openstack/mariadb-copy-data" Jan 28 17:14:35 crc kubenswrapper[4903]: I0128 17:14:35.928049 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\") pod \"mariadb-copy-data\" (UID: \"298e63ce-8f8c-4ff3-831a-89771211fb4a\") " pod="openstack/mariadb-copy-data" Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.030076 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7wd\" (UniqueName: \"kubernetes.io/projected/298e63ce-8f8c-4ff3-831a-89771211fb4a-kube-api-access-tm7wd\") pod \"mariadb-copy-data\" (UID: \"298e63ce-8f8c-4ff3-831a-89771211fb4a\") " pod="openstack/mariadb-copy-data" Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.030258 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\") pod \"mariadb-copy-data\" (UID: \"298e63ce-8f8c-4ff3-831a-89771211fb4a\") " pod="openstack/mariadb-copy-data" Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.034444 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.034618 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\") pod \"mariadb-copy-data\" (UID: \"298e63ce-8f8c-4ff3-831a-89771211fb4a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f668f0b7583d9a57213d3918519241a888a8670b5159b9455811f22efa395099/globalmount\"" pod="openstack/mariadb-copy-data" Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.054819 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7wd\" (UniqueName: \"kubernetes.io/projected/298e63ce-8f8c-4ff3-831a-89771211fb4a-kube-api-access-tm7wd\") pod \"mariadb-copy-data\" (UID: \"298e63ce-8f8c-4ff3-831a-89771211fb4a\") " pod="openstack/mariadb-copy-data" Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.064741 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76d5e94f-73ac-4e8d-a4dc-7b674e569224\") pod \"mariadb-copy-data\" (UID: \"298e63ce-8f8c-4ff3-831a-89771211fb4a\") " pod="openstack/mariadb-copy-data" Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.119509 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.716688 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 28 17:14:36 crc kubenswrapper[4903]: I0128 17:14:36.865511 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"298e63ce-8f8c-4ff3-831a-89771211fb4a","Type":"ContainerStarted","Data":"cd236ea9f86b82eacc1e2aaefb09f67159e652c33fce5ea110d50e95046b423f"} Jan 28 17:14:37 crc kubenswrapper[4903]: I0128 17:14:37.872809 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"298e63ce-8f8c-4ff3-831a-89771211fb4a","Type":"ContainerStarted","Data":"93b37c592706484a7a85369841e7028c1774b708cb6a1bbfd7a4bfb1e4abfb30"} Jan 28 17:14:37 crc kubenswrapper[4903]: I0128 17:14:37.889254 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.88924001 podStartE2EDuration="3.88924001s" podCreationTimestamp="2026-01-28 17:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:14:37.887061362 +0000 UTC m=+5350.163032873" watchObservedRunningTime="2026-01-28 17:14:37.88924001 +0000 UTC m=+5350.165211521" Jan 28 17:14:40 crc kubenswrapper[4903]: I0128 17:14:40.685788 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:40 crc kubenswrapper[4903]: I0128 17:14:40.687068 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:14:40 crc kubenswrapper[4903]: I0128 17:14:40.694058 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:40 crc kubenswrapper[4903]: I0128 17:14:40.702190 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr95z\" (UniqueName: \"kubernetes.io/projected/4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29-kube-api-access-pr95z\") pod \"mariadb-client\" (UID: \"4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29\") " pod="openstack/mariadb-client" Jan 28 17:14:40 crc kubenswrapper[4903]: I0128 17:14:40.803754 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr95z\" (UniqueName: \"kubernetes.io/projected/4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29-kube-api-access-pr95z\") pod \"mariadb-client\" (UID: \"4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29\") " pod="openstack/mariadb-client" Jan 28 17:14:40 crc kubenswrapper[4903]: I0128 17:14:40.821211 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr95z\" (UniqueName: \"kubernetes.io/projected/4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29-kube-api-access-pr95z\") pod \"mariadb-client\" (UID: \"4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29\") " pod="openstack/mariadb-client" Jan 28 17:14:41 crc kubenswrapper[4903]: I0128 17:14:41.013920 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:14:41 crc kubenswrapper[4903]: I0128 17:14:41.487925 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:41 crc kubenswrapper[4903]: W0128 17:14:41.490935 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f2a1b42_820f_45de_8cb9_1e7c5a0e3b29.slice/crio-d95488f85b7d8d80bdbf49d4e3be5e381c094e175359379bfe4eef66c1834671 WatchSource:0}: Error finding container d95488f85b7d8d80bdbf49d4e3be5e381c094e175359379bfe4eef66c1834671: Status 404 returned error can't find the container with id d95488f85b7d8d80bdbf49d4e3be5e381c094e175359379bfe4eef66c1834671 Jan 28 17:14:41 crc kubenswrapper[4903]: I0128 17:14:41.909817 4903 generic.go:334] "Generic (PLEG): container finished" podID="4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29" containerID="5719e287e19d1aada1feb99ada60f33ea5c12f2915d4d7dc5daf45de50895285" exitCode=0 Jan 28 17:14:41 crc kubenswrapper[4903]: I0128 17:14:41.909918 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29","Type":"ContainerDied","Data":"5719e287e19d1aada1feb99ada60f33ea5c12f2915d4d7dc5daf45de50895285"} Jan 28 17:14:41 crc kubenswrapper[4903]: I0128 17:14:41.910163 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29","Type":"ContainerStarted","Data":"d95488f85b7d8d80bdbf49d4e3be5e381c094e175359379bfe4eef66c1834671"} Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.223401 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.242557 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr95z\" (UniqueName: \"kubernetes.io/projected/4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29-kube-api-access-pr95z\") pod \"4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29\" (UID: \"4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29\") " Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.248770 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29/mariadb-client/0.log" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.250832 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29-kube-api-access-pr95z" (OuterVolumeSpecName: "kube-api-access-pr95z") pod "4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29" (UID: "4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29"). InnerVolumeSpecName "kube-api-access-pr95z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.281305 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.287124 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.344787 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr95z\" (UniqueName: \"kubernetes.io/projected/4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29-kube-api-access-pr95z\") on node \"crc\" DevicePath \"\"" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.427835 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:43 crc kubenswrapper[4903]: E0128 17:14:43.428193 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29" containerName="mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.428205 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29" containerName="mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.428348 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29" containerName="mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.428860 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.438217 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.547225 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfftm\" (UniqueName: \"kubernetes.io/projected/b8db17fe-8002-43fe-a9d7-3ac0aefc9198-kube-api-access-dfftm\") pod \"mariadb-client\" (UID: \"b8db17fe-8002-43fe-a9d7-3ac0aefc9198\") " pod="openstack/mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.648707 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfftm\" (UniqueName: \"kubernetes.io/projected/b8db17fe-8002-43fe-a9d7-3ac0aefc9198-kube-api-access-dfftm\") pod \"mariadb-client\" (UID: \"b8db17fe-8002-43fe-a9d7-3ac0aefc9198\") " pod="openstack/mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.665943 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfftm\" (UniqueName: \"kubernetes.io/projected/b8db17fe-8002-43fe-a9d7-3ac0aefc9198-kube-api-access-dfftm\") pod \"mariadb-client\" (UID: \"b8db17fe-8002-43fe-a9d7-3ac0aefc9198\") " pod="openstack/mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.746390 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.928584 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d95488f85b7d8d80bdbf49d4e3be5e381c094e175359379bfe4eef66c1834671" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.928639 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.945874 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29" podUID="b8db17fe-8002-43fe-a9d7-3ac0aefc9198" Jan 28 17:14:43 crc kubenswrapper[4903]: I0128 17:14:43.977947 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:43 crc kubenswrapper[4903]: W0128 17:14:43.979232 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8db17fe_8002_43fe_a9d7_3ac0aefc9198.slice/crio-dcdcc047a9617a02fb977123dbaab35fb51e20cf00cdd795ff8017a6c968d1d6 WatchSource:0}: Error finding container dcdcc047a9617a02fb977123dbaab35fb51e20cf00cdd795ff8017a6c968d1d6: Status 404 returned error can't find the container with id dcdcc047a9617a02fb977123dbaab35fb51e20cf00cdd795ff8017a6c968d1d6 Jan 28 17:14:44 crc kubenswrapper[4903]: I0128 17:14:44.429723 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29" path="/var/lib/kubelet/pods/4f2a1b42-820f-45de-8cb9-1e7c5a0e3b29/volumes" Jan 28 17:14:44 crc kubenswrapper[4903]: I0128 17:14:44.937216 4903 generic.go:334] "Generic (PLEG): container finished" podID="b8db17fe-8002-43fe-a9d7-3ac0aefc9198" containerID="e69ea0f73a75d995f5e8ff80df1193326abf465c3469e67feff9aaaa19201d6b" exitCode=0 Jan 28 17:14:44 crc kubenswrapper[4903]: I0128 17:14:44.937433 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b8db17fe-8002-43fe-a9d7-3ac0aefc9198","Type":"ContainerDied","Data":"e69ea0f73a75d995f5e8ff80df1193326abf465c3469e67feff9aaaa19201d6b"} Jan 28 17:14:44 crc kubenswrapper[4903]: I0128 17:14:44.937668 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b8db17fe-8002-43fe-a9d7-3ac0aefc9198","Type":"ContainerStarted","Data":"dcdcc047a9617a02fb977123dbaab35fb51e20cf00cdd795ff8017a6c968d1d6"} Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.279993 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.304081 4903 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_b8db17fe-8002-43fe-a9d7-3ac0aefc9198/mariadb-client/0.log" Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.333778 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.341196 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.394018 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfftm\" (UniqueName: \"kubernetes.io/projected/b8db17fe-8002-43fe-a9d7-3ac0aefc9198-kube-api-access-dfftm\") pod \"b8db17fe-8002-43fe-a9d7-3ac0aefc9198\" (UID: \"b8db17fe-8002-43fe-a9d7-3ac0aefc9198\") " Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.400878 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8db17fe-8002-43fe-a9d7-3ac0aefc9198-kube-api-access-dfftm" (OuterVolumeSpecName: "kube-api-access-dfftm") pod "b8db17fe-8002-43fe-a9d7-3ac0aefc9198" (UID: "b8db17fe-8002-43fe-a9d7-3ac0aefc9198"). 
InnerVolumeSpecName "kube-api-access-dfftm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.424470 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8db17fe-8002-43fe-a9d7-3ac0aefc9198" path="/var/lib/kubelet/pods/b8db17fe-8002-43fe-a9d7-3ac0aefc9198/volumes" Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.496546 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfftm\" (UniqueName: \"kubernetes.io/projected/b8db17fe-8002-43fe-a9d7-3ac0aefc9198-kube-api-access-dfftm\") on node \"crc\" DevicePath \"\"" Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.955345 4903 scope.go:117] "RemoveContainer" containerID="e69ea0f73a75d995f5e8ff80df1193326abf465c3469e67feff9aaaa19201d6b" Jan 28 17:14:46 crc kubenswrapper[4903]: I0128 17:14:46.955431 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.155783 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq"] Jan 28 17:15:00 crc kubenswrapper[4903]: E0128 17:15:00.156818 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8db17fe-8002-43fe-a9d7-3ac0aefc9198" containerName="mariadb-client" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.156838 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8db17fe-8002-43fe-a9d7-3ac0aefc9198" containerName="mariadb-client" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.156990 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8db17fe-8002-43fe-a9d7-3ac0aefc9198" containerName="mariadb-client" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.157633 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.160345 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.160512 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.169436 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq"] Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.303722 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-config-volume\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.303841 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-secret-volume\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.304029 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdplh\" (UniqueName: \"kubernetes.io/projected/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-kube-api-access-wdplh\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.405495 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdplh\" (UniqueName: \"kubernetes.io/projected/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-kube-api-access-wdplh\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.405653 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-config-volume\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.405705 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-secret-volume\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.406863 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-config-volume\") pod 
\"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.420775 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-secret-volume\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.450022 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdplh\" (UniqueName: \"kubernetes.io/projected/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-kube-api-access-wdplh\") pod \"collect-profiles-29493675-vd2rq\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:00 crc kubenswrapper[4903]: I0128 17:15:00.493253 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:01 crc kubenswrapper[4903]: I0128 17:15:01.031335 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq"] Jan 28 17:15:01 crc kubenswrapper[4903]: I0128 17:15:01.064471 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" event={"ID":"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b","Type":"ContainerStarted","Data":"3010cc70d9c7e7494f9d0b3e3d6a7d8833f8efabe640e8fd88a96e9a65655cad"} Jan 28 17:15:02 crc kubenswrapper[4903]: I0128 17:15:02.073265 4903 generic.go:334] "Generic (PLEG): container finished" podID="fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" containerID="eadd54a8ec7affa286c963983a051d7e9780c6557f0a4b11cec81cb658c8e97a" exitCode=0 Jan 28 17:15:02 crc kubenswrapper[4903]: I0128 17:15:02.073651 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" event={"ID":"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b","Type":"ContainerDied","Data":"eadd54a8ec7affa286c963983a051d7e9780c6557f0a4b11cec81cb658c8e97a"} Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.371293 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.548764 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-config-volume\") pod \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.548810 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdplh\" (UniqueName: \"kubernetes.io/projected/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-kube-api-access-wdplh\") pod \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.548844 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-secret-volume\") pod \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\" (UID: \"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b\") " Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.549792 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" (UID: "fc39a5ce-e947-4b9d-9d49-dc984dcdb46b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.555034 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-kube-api-access-wdplh" (OuterVolumeSpecName: "kube-api-access-wdplh") pod "fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" (UID: "fc39a5ce-e947-4b9d-9d49-dc984dcdb46b"). InnerVolumeSpecName "kube-api-access-wdplh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.555099 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" (UID: "fc39a5ce-e947-4b9d-9d49-dc984dcdb46b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.651207 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.651254 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdplh\" (UniqueName: \"kubernetes.io/projected/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-kube-api-access-wdplh\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:03 crc kubenswrapper[4903]: I0128 17:15:03.651267 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:04 crc kubenswrapper[4903]: I0128 17:15:04.090779 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" event={"ID":"fc39a5ce-e947-4b9d-9d49-dc984dcdb46b","Type":"ContainerDied","Data":"3010cc70d9c7e7494f9d0b3e3d6a7d8833f8efabe640e8fd88a96e9a65655cad"} Jan 28 17:15:04 crc kubenswrapper[4903]: I0128 17:15:04.091083 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3010cc70d9c7e7494f9d0b3e3d6a7d8833f8efabe640e8fd88a96e9a65655cad" Jan 28 17:15:04 crc kubenswrapper[4903]: I0128 17:15:04.090859 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq" Jan 28 17:15:04 crc kubenswrapper[4903]: I0128 17:15:04.451370 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms"] Jan 28 17:15:04 crc kubenswrapper[4903]: I0128 17:15:04.458919 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493630-j5kms"] Jan 28 17:15:06 crc kubenswrapper[4903]: I0128 17:15:06.421564 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f46c9d2-c258-49d5-84b0-61e5dd23d5af" path="/var/lib/kubelet/pods/5f46c9d2-c258-49d5-84b0-61e5dd23d5af/volumes" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.191763 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 17:15:27 crc kubenswrapper[4903]: E0128 17:15:27.192695 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" containerName="collect-profiles" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.192709 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" containerName="collect-profiles" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.192898 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" containerName="collect-profiles" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.193698 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.195762 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.196010 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.196192 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qctl2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.196371 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.197123 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.200732 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.209421 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.212099 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.221080 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.229977 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.241861 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.251206 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.327563 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgrjd\" (UniqueName: \"kubernetes.io/projected/d70023d3-04b9-47b3-a3af-e57b30b963a1-kube-api-access-qgrjd\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.327617 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9c3549d-353f-4a2c-972a-12a8fb29042b-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.327646 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.327671 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c5a589d-6398-41fe-bf5f-396effafd474-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " 
pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.327706 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.327744 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65z46\" (UniqueName: \"kubernetes.io/projected/9c5a589d-6398-41fe-bf5f-396effafd474-kube-api-access-65z46\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.327817 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328004 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5a589d-6398-41fe-bf5f-396effafd474-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328066 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d70023d3-04b9-47b3-a3af-e57b30b963a1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328141 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328171 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328208 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328266 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " 
pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328298 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5a589d-6398-41fe-bf5f-396effafd474-config\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328314 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9c3549d-353f-4a2c-972a-12a8fb29042b-config\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328340 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328367 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9c3549d-353f-4a2c-972a-12a8fb29042b-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328384 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70023d3-04b9-47b3-a3af-e57b30b963a1-config\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328408 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328511 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hd6\" (UniqueName: \"kubernetes.io/projected/d9c3549d-353f-4a2c-972a-12a8fb29042b-kube-api-access-77hd6\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328656 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d70023d3-04b9-47b3-a3af-e57b30b963a1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328686 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 
17:15:27.328714 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.328741 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.430907 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9c3549d-353f-4a2c-972a-12a8fb29042b-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.430965 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70023d3-04b9-47b3-a3af-e57b30b963a1-config\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431001 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431044 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77hd6\" (UniqueName: \"kubernetes.io/projected/d9c3549d-353f-4a2c-972a-12a8fb29042b-kube-api-access-77hd6\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431073 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d70023d3-04b9-47b3-a3af-e57b30b963a1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431100 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431127 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431195 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431246 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgrjd\" (UniqueName: \"kubernetes.io/projected/d70023d3-04b9-47b3-a3af-e57b30b963a1-kube-api-access-qgrjd\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431267 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d9c3549d-353f-4a2c-972a-12a8fb29042b-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431293 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431332 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c5a589d-6398-41fe-bf5f-396effafd474-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431366 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431401 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65z46\" (UniqueName: \"kubernetes.io/projected/9c5a589d-6398-41fe-bf5f-396effafd474-kube-api-access-65z46\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431425 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431453 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5a589d-6398-41fe-bf5f-396effafd474-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431474 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d70023d3-04b9-47b3-a3af-e57b30b963a1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " 
pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431497 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431519 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431568 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431601 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431630 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5a589d-6398-41fe-bf5f-396effafd474-config\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431650 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9c3549d-353f-4a2c-972a-12a8fb29042b-config\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.431675 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.432052 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d70023d3-04b9-47b3-a3af-e57b30b963a1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.432511 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c5a589d-6398-41fe-bf5f-396effafd474-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.432779 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/d9c3549d-353f-4a2c-972a-12a8fb29042b-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.433651 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d70023d3-04b9-47b3-a3af-e57b30b963a1-config\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.433760 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c5a589d-6398-41fe-bf5f-396effafd474-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.434198 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9c3549d-353f-4a2c-972a-12a8fb29042b-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.434375 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9c3549d-353f-4a2c-972a-12a8fb29042b-config\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.435063 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c5a589d-6398-41fe-bf5f-396effafd474-config\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.435494 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d70023d3-04b9-47b3-a3af-e57b30b963a1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.438948 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.438991 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.439014 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/601ad57549f55fff46692b7431628fb23a5deed50dc9af9107ec6bcfdf5a687b/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.440377 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.440732 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.440765 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/198159b431ce83aa25e20d79ba5e55ebd0c23b7445ffcdab40c61cfdb81c8a6e/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.441505 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.441584 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c042796613f7b5e961e2290fd2886bc6535753188334ac0717735cee1d376cd6/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.442634 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.445964 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70023d3-04b9-47b3-a3af-e57b30b963a1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.448310 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.450182 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.450896 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77hd6\" (UniqueName: \"kubernetes.io/projected/d9c3549d-353f-4a2c-972a-12a8fb29042b-kube-api-access-77hd6\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.451486 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.454631 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9c3549d-353f-4a2c-972a-12a8fb29042b-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.457053 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5a589d-6398-41fe-bf5f-396effafd474-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc 
kubenswrapper[4903]: I0128 17:15:27.457824 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65z46\" (UniqueName: \"kubernetes.io/projected/9c5a589d-6398-41fe-bf5f-396effafd474-kube-api-access-65z46\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.459657 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgrjd\" (UniqueName: \"kubernetes.io/projected/d70023d3-04b9-47b3-a3af-e57b30b963a1-kube-api-access-qgrjd\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.477393 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ad4feff4-69f9-49ef-b17f-60c5083f9b1d\") pod \"ovsdbserver-nb-1\" (UID: \"9c5a589d-6398-41fe-bf5f-396effafd474\") " pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.480178 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec4b8334-4f6d-4315-87f4-a0247b021303\") pod \"ovsdbserver-nb-2\" (UID: \"d9c3549d-353f-4a2c-972a-12a8fb29042b\") " pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.483262 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0bd6f2d5-8e63-4ec3-bb54-8c7f9401ea66\") pod \"ovsdbserver-nb-0\" (UID: \"d70023d3-04b9-47b3-a3af-e57b30b963a1\") " pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.513923 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.539699 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:27 crc kubenswrapper[4903]: I0128 17:15:27.553465 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:28 crc kubenswrapper[4903]: I0128 17:15:28.083686 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 17:15:28 crc kubenswrapper[4903]: I0128 17:15:28.200184 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 28 17:15:28 crc kubenswrapper[4903]: I0128 17:15:28.302114 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d70023d3-04b9-47b3-a3af-e57b30b963a1","Type":"ContainerStarted","Data":"61c7b30c4d8d742ab6a4ac90564d84730456e1926a5a022654fbd7e413b5f9b6"} Jan 28 17:15:28 crc kubenswrapper[4903]: I0128 17:15:28.302159 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d70023d3-04b9-47b3-a3af-e57b30b963a1","Type":"ContainerStarted","Data":"975c5d04c45f66f835ba712e612265411f4492ff2f442c3e3e067d2c4f3267f5"} Jan 28 17:15:28 crc kubenswrapper[4903]: I0128 17:15:28.303230 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"9c5a589d-6398-41fe-bf5f-396effafd474","Type":"ContainerStarted","Data":"d66d68f0e5dbb1005feb347e1ed9982eb83b55d9b782b2d124ab7eb7a8f86ba7"} Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.203106 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.322729 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"9c5a589d-6398-41fe-bf5f-396effafd474","Type":"ContainerStarted","Data":"19767665ffed48f61cca94137b2c775786116daac4630818dc14444b2942b2b8"} Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.323004 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"9c5a589d-6398-41fe-bf5f-396effafd474","Type":"ContainerStarted","Data":"560031e635c5de74b685de7206489962cd6a5a2ecb31e53ed85e5e36e4794038"} Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.324514 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"d9c3549d-353f-4a2c-972a-12a8fb29042b","Type":"ContainerStarted","Data":"c0a8feed3d4456ab6bd84d782510a9c9ffb7dac3b78d169346d67f6627084e42"} Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.327455 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d70023d3-04b9-47b3-a3af-e57b30b963a1","Type":"ContainerStarted","Data":"5b8aaabce7d5775fb7ef369c4c7d5237dfbd42433c86a0ea9a81e5a749b138c3"} Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.350415 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.350393763 podStartE2EDuration="3.350393763s" podCreationTimestamp="2026-01-28 17:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:29.345965922 +0000 UTC m=+5401.621937453" watchObservedRunningTime="2026-01-28 17:15:29.350393763 +0000 UTC m=+5401.626365274" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.368079 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.368059591 podStartE2EDuration="3.368059591s" podCreationTimestamp="2026-01-28 17:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:29.363724434 +0000 UTC m=+5401.639695945" watchObservedRunningTime="2026-01-28 17:15:29.368059591 +0000 UTC m=+5401.644031102" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.430165 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.431400 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.433371 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.433649 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.433774 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.434144 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-nmhjj" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.451346 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.479757 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.480981 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.487011 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.488319 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.493224 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.506231 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578584 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578620 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578659 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80ae9cb7-477b-4029-b5bb-416a0b271e6e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578678 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578704 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6170af60-7e18-4b39-a990-669c9794faa2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6170af60-7e18-4b39-a990-669c9794faa2\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578722 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578756 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578773 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-config\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 
17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578789 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578808 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e02efa34-4095-4db8-993d-313962643b73-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578830 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wggjz\" (UniqueName: \"kubernetes.io/projected/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-kube-api-access-wggjz\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578859 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80ae9cb7-477b-4029-b5bb-416a0b271e6e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578885 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2bz\" (UniqueName: \"kubernetes.io/projected/e02efa34-4095-4db8-993d-313962643b73-kube-api-access-4m2bz\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578912 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e02efa34-4095-4db8-993d-313962643b73-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578958 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.578993 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579018 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ae9cb7-477b-4029-b5bb-416a0b271e6e-config\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579040 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579056 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7424\" (UniqueName: \"kubernetes.io/projected/80ae9cb7-477b-4029-b5bb-416a0b271e6e-kube-api-access-b7424\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579072 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579091 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579108 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579125 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.579144 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02efa34-4095-4db8-993d-313962643b73-config\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.680910 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.680964 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681019 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80ae9cb7-477b-4029-b5bb-416a0b271e6e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681039 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681069 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6170af60-7e18-4b39-a990-669c9794faa2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6170af60-7e18-4b39-a990-669c9794faa2\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681105 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681127 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681146 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-config\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681178 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681200 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e02efa34-4095-4db8-993d-313962643b73-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681218 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wggjz\" (UniqueName: \"kubernetes.io/projected/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-kube-api-access-wggjz\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681254 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80ae9cb7-477b-4029-b5bb-416a0b271e6e-scripts\") pod \"ovsdbserver-sb-2\" (UID: 
\"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681276 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m2bz\" (UniqueName: \"kubernetes.io/projected/e02efa34-4095-4db8-993d-313962643b73-kube-api-access-4m2bz\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681297 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e02efa34-4095-4db8-993d-313962643b73-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681347 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681380 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681417 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ae9cb7-477b-4029-b5bb-416a0b271e6e-config\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681438 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681453 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7424\" (UniqueName: \"kubernetes.io/projected/80ae9cb7-477b-4029-b5bb-416a0b271e6e-kube-api-access-b7424\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681484 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681501 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681599 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80ae9cb7-477b-4029-b5bb-416a0b271e6e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681521 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681674 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.681717 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02efa34-4095-4db8-993d-313962643b73-config\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.682466 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.683836 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e02efa34-4095-4db8-993d-313962643b73-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.684741 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.684769 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/602b1a3f26f0815dd8f9c3e084528e1eb0cb64c581191d6cedbdba94191cf06f/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.684857 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.684877 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6170af60-7e18-4b39-a990-669c9794faa2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6170af60-7e18-4b39-a990-669c9794faa2\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e2c35ec6d84dbc089f2d14ee3d3c527212c826a5518b35f238d792e56cd15f7c/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.685639 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.685736 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7542e3a2cf8d5c890d851c7c1cc348d8788a2774e02beef0e515d2deac6d0d57/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.686725 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80ae9cb7-477b-4029-b5bb-416a0b271e6e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.687075 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e02efa34-4095-4db8-993d-313962643b73-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.687296 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.687535 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.687622 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e02efa34-4095-4db8-993d-313962643b73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.687821 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ae9cb7-477b-4029-b5bb-416a0b271e6e-config\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.689202 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.691202 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02efa34-4095-4db8-993d-313962643b73-config\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.691904 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-config\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.692685 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.692817 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.699642 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.700477 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.705002 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7424\" (UniqueName: \"kubernetes.io/projected/80ae9cb7-477b-4029-b5bb-416a0b271e6e-kube-api-access-b7424\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.708319 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80ae9cb7-477b-4029-b5bb-416a0b271e6e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.712575 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 
17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.713284 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m2bz\" (UniqueName: \"kubernetes.io/projected/e02efa34-4095-4db8-993d-313962643b73-kube-api-access-4m2bz\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.717338 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wggjz\" (UniqueName: \"kubernetes.io/projected/70d165a8-d1b2-4ea2-81e5-aac4c8db0c74-kube-api-access-wggjz\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.736523 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6bdcae8c-6e85-43e5-855a-7882f0348b5a\") pod \"ovsdbserver-sb-1\" (UID: \"e02efa34-4095-4db8-993d-313962643b73\") " pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.737347 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1bbe263-1bdb-4b64-8e4c-3759dea32a8a\") pod \"ovsdbserver-sb-2\" (UID: \"80ae9cb7-477b-4029-b5bb-416a0b271e6e\") " pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.749547 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6170af60-7e18-4b39-a990-669c9794faa2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6170af60-7e18-4b39-a990-669c9794faa2\") pod \"ovsdbserver-sb-0\" (UID: \"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74\") " pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.768296 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.800905 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:29 crc kubenswrapper[4903]: I0128 17:15:29.814780 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.291957 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 17:15:30 crc kubenswrapper[4903]: W0128 17:15:30.293419 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d165a8_d1b2_4ea2_81e5_aac4c8db0c74.slice/crio-99423c59c1b8071355abdc6f7c3c471bcedfccbbf518e1ca7734ab9e27cc197b WatchSource:0}: Error finding container 99423c59c1b8071355abdc6f7c3c471bcedfccbbf518e1ca7734ab9e27cc197b: Status 404 returned error can't find the container with id 99423c59c1b8071355abdc6f7c3c471bcedfccbbf518e1ca7734ab9e27cc197b Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.338459 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"d9c3549d-353f-4a2c-972a-12a8fb29042b","Type":"ContainerStarted","Data":"2865667858f6e3ad02f1d2b4ac76b08c2f6a4d8c4e5cb66931cc6aa98e29e2ab"} Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.338511 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"d9c3549d-353f-4a2c-972a-12a8fb29042b","Type":"ContainerStarted","Data":"69e64411e79b048f340d8bb4427c8096e6fc49e0d24625fbc190431c832dcf53"} Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.340495 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74","Type":"ContainerStarted","Data":"99423c59c1b8071355abdc6f7c3c471bcedfccbbf518e1ca7734ab9e27cc197b"} Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.363012 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.36298485 podStartE2EDuration="4.36298485s" podCreationTimestamp="2026-01-28 17:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:30.358029875 +0000 UTC m=+5402.634001386" watchObservedRunningTime="2026-01-28 17:15:30.36298485 +0000 UTC m=+5402.638956361" Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.400628 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.514299 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.540312 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:30 crc kubenswrapper[4903]: I0128 17:15:30.554257 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.011011 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.358273 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"e02efa34-4095-4db8-993d-313962643b73","Type":"ContainerStarted","Data":"bd5ff375e29e7436b409bbcaa7e79c4c729630f6a53ee66b87d41da188de53db"} Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.358657 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" 
event={"ID":"e02efa34-4095-4db8-993d-313962643b73","Type":"ContainerStarted","Data":"256423b2c28dc773ff7a830f81e6b15a5a48691264be664704b884bda1119792"} Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.361557 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74","Type":"ContainerStarted","Data":"2583be4ed3d936664a5b57f6c1357af1b4df2893b6dd72d83ceec9c0b7d267e9"} Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.361595 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"70d165a8-d1b2-4ea2-81e5-aac4c8db0c74","Type":"ContainerStarted","Data":"544c4805fc610839d67e61add10ba737d90ab369f3e218142dd5e47085975003"} Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.363968 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"80ae9cb7-477b-4029-b5bb-416a0b271e6e","Type":"ContainerStarted","Data":"5ff6046994cbec781c0a2af597f35c09a0d9affa52664b50d2c7961942bdedd7"} Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.364028 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"80ae9cb7-477b-4029-b5bb-416a0b271e6e","Type":"ContainerStarted","Data":"c9be25ddd9fa888da73aeb089a26f1c4ef268b23a660f8c7a4c3541867bf1b72"} Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.364046 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"80ae9cb7-477b-4029-b5bb-416a0b271e6e","Type":"ContainerStarted","Data":"076f1ef024eba0d4b062f7d233432439d64459e3d7270a64c5caf8e2dc868740"} Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.388902 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.388850756 podStartE2EDuration="3.388850756s" podCreationTimestamp="2026-01-28 17:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:31.380362106 +0000 UTC m=+5403.656333617" watchObservedRunningTime="2026-01-28 17:15:31.388850756 +0000 UTC m=+5403.664822267" Jan 28 17:15:31 crc kubenswrapper[4903]: I0128 17:15:31.400063 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.40004756 podStartE2EDuration="3.40004756s" podCreationTimestamp="2026-01-28 17:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:31.39892545 +0000 UTC m=+5403.674896961" watchObservedRunningTime="2026-01-28 17:15:31.40004756 +0000 UTC m=+5403.676019071" Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.377405 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"e02efa34-4095-4db8-993d-313962643b73","Type":"ContainerStarted","Data":"9ff56ac1f595f0d3049e51a72dc3ef5ba6fddcb4f3b6b862bc1ef87e625ab504"} Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.411000 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.410970732 podStartE2EDuration="4.410970732s" podCreationTimestamp="2026-01-28 17:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:32.396231962 +0000 UTC m=+5404.672203503" 
watchObservedRunningTime="2026-01-28 17:15:32.410970732 +0000 UTC m=+5404.686942283" Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.514260 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.539932 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.553607 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.769197 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.801602 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:32 crc kubenswrapper[4903]: I0128 17:15:32.815859 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.558068 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.582550 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.597934 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.601637 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.644079 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.799262 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-pmwwv"] Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.800596 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.802362 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.815556 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-pmwwv"] Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.860755 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rpl\" (UniqueName: \"kubernetes.io/projected/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-kube-api-access-z7rpl\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.860828 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.860870 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-config\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.860961 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-dns-svc\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.962796 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7rpl\" (UniqueName: \"kubernetes.io/projected/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-kube-api-access-z7rpl\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.962862 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.962891 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-config\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.962977 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-dns-svc\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" 
Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.963977 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-dns-svc\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.963997 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-config\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.964046 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-ovsdbserver-nb\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:33 crc kubenswrapper[4903]: I0128 17:15:33.985793 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7rpl\" (UniqueName: \"kubernetes.io/projected/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-kube-api-access-z7rpl\") pod \"dnsmasq-dns-8574559fdf-pmwwv\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:34 crc kubenswrapper[4903]: I0128 17:15:34.122265 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:34 crc kubenswrapper[4903]: I0128 17:15:34.436501 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 28 17:15:34 crc kubenswrapper[4903]: I0128 17:15:34.540618 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-pmwwv"] Jan 28 17:15:34 crc kubenswrapper[4903]: I0128 17:15:34.769368 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:34 crc kubenswrapper[4903]: I0128 17:15:34.801505 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:34 crc kubenswrapper[4903]: I0128 17:15:34.815729 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:35 crc kubenswrapper[4903]: I0128 17:15:35.402249 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" event={"ID":"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9","Type":"ContainerStarted","Data":"c1b6e57f33124ee8eee824afc5c2d4194c32d50b7b091a4aa254f6a8425b3f08"} Jan 28 17:15:35 crc kubenswrapper[4903]: I0128 17:15:35.812929 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:35 crc kubenswrapper[4903]: I0128 17:15:35.841827 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:35 crc kubenswrapper[4903]: I0128 17:15:35.858275 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:35 crc kubenswrapper[4903]: I0128 17:15:35.864806 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 17:15:35 crc 
kubenswrapper[4903]: I0128 17:15:35.887205 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.073780 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-pmwwv"] Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.097629 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6998c99fcf-lzx7g"] Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.126826 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.136197 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.153820 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6998c99fcf-lzx7g"] Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.231184 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-config\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.231365 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-dns-svc\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.231453 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-sb\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.231499 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5dv8\" (UniqueName: \"kubernetes.io/projected/68b6e5f8-e4da-4d0b-a062-953348527ac6-kube-api-access-h5dv8\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.231569 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-nb\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.332826 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-dns-svc\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.332903 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-sb\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.332932 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5dv8\" (UniqueName: \"kubernetes.io/projected/68b6e5f8-e4da-4d0b-a062-953348527ac6-kube-api-access-h5dv8\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.332970 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-nb\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.333065 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-config\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.333807 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-sb\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.333828 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-dns-svc\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.333986 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-config\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.334069 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-nb\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.351057 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5dv8\" (UniqueName: \"kubernetes.io/projected/68b6e5f8-e4da-4d0b-a062-953348527ac6-kube-api-access-h5dv8\") pod \"dnsmasq-dns-6998c99fcf-lzx7g\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.447855 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.453608 4903 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:36 crc kubenswrapper[4903]: I0128 17:15:36.936369 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6998c99fcf-lzx7g"] Jan 28 17:15:37 crc kubenswrapper[4903]: I0128 17:15:37.416247 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" event={"ID":"68b6e5f8-e4da-4d0b-a062-953348527ac6","Type":"ContainerStarted","Data":"b9bbefdf90921b4e4b1c90d7b7c2fa665559d9bf8c21989f44c64842ef8a66e6"} Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.426289 4903 generic.go:334] "Generic (PLEG): container finished" podID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerID="090676cfe480499ffeecdf09b5a8d71dc7cb59cd1f1766425eb51f5826208e8c" exitCode=0 Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.426353 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" event={"ID":"68b6e5f8-e4da-4d0b-a062-953348527ac6","Type":"ContainerDied","Data":"090676cfe480499ffeecdf09b5a8d71dc7cb59cd1f1766425eb51f5826208e8c"} Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.432074 4903 generic.go:334] "Generic (PLEG): container finished" podID="d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" containerID="3d2a8ec7397886f3851ef9180671bbd4317b1ca614eeb86a436f82cab7e1c450" exitCode=0 Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.432185 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" event={"ID":"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9","Type":"ContainerDied","Data":"3d2a8ec7397886f3851ef9180671bbd4317b1ca614eeb86a436f82cab7e1c450"} Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.779317 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.890095 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-ovsdbserver-nb\") pod \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.890217 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7rpl\" (UniqueName: \"kubernetes.io/projected/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-kube-api-access-z7rpl\") pod \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.890266 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-config\") pod \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.890311 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-dns-svc\") pod \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\" (UID: \"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9\") " Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.895937 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-kube-api-access-z7rpl" (OuterVolumeSpecName: "kube-api-access-z7rpl") pod "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" (UID: "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9"). InnerVolumeSpecName "kube-api-access-z7rpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.910077 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-config" (OuterVolumeSpecName: "config") pod "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" (UID: "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.910802 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" (UID: "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.911333 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" (UID: "d84b5cd2-1fbd-4beb-8088-7f9ae15faea9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.923802 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 28 17:15:38 crc kubenswrapper[4903]: E0128 17:15:38.924140 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" containerName="init" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.924152 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" containerName="init" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.924311 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" containerName="init" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.924940 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.932097 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.935279 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.993021 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.993078 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.993092 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7rpl\" (UniqueName: \"kubernetes.io/projected/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-kube-api-access-z7rpl\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:38 crc kubenswrapper[4903]: I0128 17:15:38.993101 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.095575 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvw7b\" (UniqueName: \"kubernetes.io/projected/359838d4-0ee8-4ec0-a6d9-238346bda738-kube-api-access-vvw7b\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.096324 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.096479 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/359838d4-0ee8-4ec0-a6d9-238346bda738-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.198402 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/359838d4-0ee8-4ec0-a6d9-238346bda738-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.198463 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvw7b\" (UniqueName: \"kubernetes.io/projected/359838d4-0ee8-4ec0-a6d9-238346bda738-kube-api-access-vvw7b\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.198497 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.202304 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.202347 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/617f8f233a52a9c3f7f847d8c11b07240422acc313bfad126ac6fbc815860ac2/globalmount\"" pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.202575 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/359838d4-0ee8-4ec0-a6d9-238346bda738-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.215263 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvw7b\" (UniqueName: \"kubernetes.io/projected/359838d4-0ee8-4ec0-a6d9-238346bda738-kube-api-access-vvw7b\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.231303 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c739ee48-6594-4da3-8d23-86690f6f35aa\") pod \"ovn-copy-data\" (UID: \"359838d4-0ee8-4ec0-a6d9-238346bda738\") " pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.255891 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.450766 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.451630 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8574559fdf-pmwwv" event={"ID":"d84b5cd2-1fbd-4beb-8088-7f9ae15faea9","Type":"ContainerDied","Data":"c1b6e57f33124ee8eee824afc5c2d4194c32d50b7b091a4aa254f6a8425b3f08"} Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.451730 4903 scope.go:117] "RemoveContainer" containerID="3d2a8ec7397886f3851ef9180671bbd4317b1ca614eeb86a436f82cab7e1c450" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.455394 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" event={"ID":"68b6e5f8-e4da-4d0b-a062-953348527ac6","Type":"ContainerStarted","Data":"ac7adb019d5a19fee0a814ce5780886268d62bceacd8f0c2daaa9a6f1d868dea"} Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.457157 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.571784 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" podStartSLOduration=3.571761002 podStartE2EDuration="3.571761002s" podCreationTimestamp="2026-01-28 17:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:39.475241524 +0000 UTC m=+5411.751213035" watchObservedRunningTime="2026-01-28 17:15:39.571761002 +0000 UTC m=+5411.847732523" Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.590088 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-pmwwv"] Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.598323 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8574559fdf-pmwwv"] Jan 28 17:15:39 crc kubenswrapper[4903]: I0128 17:15:39.781945 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 28 17:15:39 crc kubenswrapper[4903]: W0128 17:15:39.790162 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod359838d4_0ee8_4ec0_a6d9_238346bda738.slice/crio-99ad6c548e49b34586cfaafbe847f5b3a3c02955f37d523ee35df01cd96e4fcb WatchSource:0}: Error finding container 99ad6c548e49b34586cfaafbe847f5b3a3c02955f37d523ee35df01cd96e4fcb: Status 404 returned error can't find the container with id 99ad6c548e49b34586cfaafbe847f5b3a3c02955f37d523ee35df01cd96e4fcb Jan 28 17:15:40 crc kubenswrapper[4903]: I0128 17:15:40.422627 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84b5cd2-1fbd-4beb-8088-7f9ae15faea9" path="/var/lib/kubelet/pods/d84b5cd2-1fbd-4beb-8088-7f9ae15faea9/volumes" Jan 28 17:15:40 crc kubenswrapper[4903]: I0128 17:15:40.470132 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"359838d4-0ee8-4ec0-a6d9-238346bda738","Type":"ContainerStarted","Data":"99ad6c548e49b34586cfaafbe847f5b3a3c02955f37d523ee35df01cd96e4fcb"} Jan 28 17:15:41 crc kubenswrapper[4903]: I0128 17:15:41.479504 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"359838d4-0ee8-4ec0-a6d9-238346bda738","Type":"ContainerStarted","Data":"044d4205fb18091bb9364e97ab60bf6658a668e9bfc8313df6684584aacfad8b"} Jan 28 17:15:41 crc kubenswrapper[4903]: I0128 17:15:41.502013 
4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.7522558029999997 podStartE2EDuration="4.50198969s" podCreationTimestamp="2026-01-28 17:15:37 +0000 UTC" firstStartedPulling="2026-01-28 17:15:39.793318252 +0000 UTC m=+5412.069289763" lastFinishedPulling="2026-01-28 17:15:40.543052139 +0000 UTC m=+5412.819023650" observedRunningTime="2026-01-28 17:15:41.494244801 +0000 UTC m=+5413.770216322" watchObservedRunningTime="2026-01-28 17:15:41.50198969 +0000 UTC m=+5413.777961211" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.139328 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kh2cv"] Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.142848 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.150479 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kh2cv"] Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.270514 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-utilities\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.270707 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2npc\" (UniqueName: \"kubernetes.io/projected/7cc41660-aad2-4b4e-8594-7ad053989bb0-kube-api-access-l2npc\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.270800 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-catalog-content\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.372129 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2npc\" (UniqueName: \"kubernetes.io/projected/7cc41660-aad2-4b4e-8594-7ad053989bb0-kube-api-access-l2npc\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.372214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-catalog-content\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.372302 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-utilities\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc 
kubenswrapper[4903]: I0128 17:15:43.372824 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-utilities\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.373109 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-catalog-content\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.392658 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2npc\" (UniqueName: \"kubernetes.io/projected/7cc41660-aad2-4b4e-8594-7ad053989bb0-kube-api-access-l2npc\") pod \"community-operators-kh2cv\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.470860 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:43 crc kubenswrapper[4903]: I0128 17:15:43.967920 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kh2cv"] Jan 28 17:15:43 crc kubenswrapper[4903]: W0128 17:15:43.968011 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc41660_aad2_4b4e_8594_7ad053989bb0.slice/crio-81897ae28af3a8421432a6893a7f19f079a3873625dbb73b67225c6c5a1c74cf WatchSource:0}: Error finding container 81897ae28af3a8421432a6893a7f19f079a3873625dbb73b67225c6c5a1c74cf: Status 404 returned error can't find the container with id 81897ae28af3a8421432a6893a7f19f079a3873625dbb73b67225c6c5a1c74cf Jan 28 17:15:44 crc kubenswrapper[4903]: I0128 17:15:44.502109 4903 generic.go:334] "Generic (PLEG): container finished" podID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerID="67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8" exitCode=0 Jan 28 17:15:44 crc kubenswrapper[4903]: I0128 17:15:44.502266 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2cv" event={"ID":"7cc41660-aad2-4b4e-8594-7ad053989bb0","Type":"ContainerDied","Data":"67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8"} Jan 28 17:15:44 crc kubenswrapper[4903]: I0128 17:15:44.502494 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2cv" event={"ID":"7cc41660-aad2-4b4e-8594-7ad053989bb0","Type":"ContainerStarted","Data":"81897ae28af3a8421432a6893a7f19f079a3873625dbb73b67225c6c5a1c74cf"} Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.554125 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kqlxx"] Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.566637 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.567641 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kqlxx"] Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.711297 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-catalog-content\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.711721 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-utilities\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.711753 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26v6c\" (UniqueName: \"kubernetes.io/projected/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-kube-api-access-26v6c\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.812988 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-catalog-content\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.813098 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-utilities\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.813120 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26v6c\" (UniqueName: \"kubernetes.io/projected/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-kube-api-access-26v6c\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.813507 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-catalog-content\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.813684 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-utilities\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.845562 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-26v6c\" (UniqueName: \"kubernetes.io/projected/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-kube-api-access-26v6c\") pod \"certified-operators-kqlxx\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:45 crc kubenswrapper[4903]: I0128 17:15:45.899683 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.429128 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kqlxx"] Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.455586 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.541755 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-2zxdz"] Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.542010 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" podUID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerName="dnsmasq-dns" containerID="cri-o://68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc" gracePeriod=10 Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.575149 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2cv" event={"ID":"7cc41660-aad2-4b4e-8594-7ad053989bb0","Type":"ContainerStarted","Data":"ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9"} Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.578289 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.584858 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.590575 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.590734 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.590872 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.591055 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-sfgg8" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.604402 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqlxx" event={"ID":"0c9953d6-c8ce-493f-af50-54a9aa85a7a3","Type":"ContainerStarted","Data":"3fd920ce5ddff34c6a15ad571ddc3547b6e3595c1020ea19a9a904293debb67b"} Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.605797 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.729783 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-config\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.729864 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-scripts\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.729941 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.729973 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vw7j\" (UniqueName: \"kubernetes.io/projected/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-kube-api-access-9vw7j\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.730013 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.730067 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.730089 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.832436 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.832483 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-config\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.832506 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-scripts\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.832612 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.832648 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vw7j\" (UniqueName: \"kubernetes.io/projected/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-kube-api-access-9vw7j\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.832698 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.832749 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.833388 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.833893 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-config\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.833894 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-scripts\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.838398 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.839292 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.840869 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:46 crc kubenswrapper[4903]: I0128 17:15:46.857502 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vw7j\" (UniqueName: \"kubernetes.io/projected/d32e25a1-1961-4a59-9d27-7a8fb08a4b97-kube-api-access-9vw7j\") pod \"ovn-northd-0\" (UID: \"d32e25a1-1961-4a59-9d27-7a8fb08a4b97\") " pod="openstack/ovn-northd-0" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.056453 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.059761 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.137476 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9cmg\" (UniqueName: \"kubernetes.io/projected/d0b61dea-09c9-4364-9eaf-bf0e94729d30-kube-api-access-l9cmg\") pod \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.139272 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-dns-svc\") pod \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.139384 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-config\") pod \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\" (UID: \"d0b61dea-09c9-4364-9eaf-bf0e94729d30\") " Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.141371 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b61dea-09c9-4364-9eaf-bf0e94729d30-kube-api-access-l9cmg" (OuterVolumeSpecName: "kube-api-access-l9cmg") pod "d0b61dea-09c9-4364-9eaf-bf0e94729d30" (UID: "d0b61dea-09c9-4364-9eaf-bf0e94729d30"). InnerVolumeSpecName "kube-api-access-l9cmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.190699 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-config" (OuterVolumeSpecName: "config") pod "d0b61dea-09c9-4364-9eaf-bf0e94729d30" (UID: "d0b61dea-09c9-4364-9eaf-bf0e94729d30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.203855 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0b61dea-09c9-4364-9eaf-bf0e94729d30" (UID: "d0b61dea-09c9-4364-9eaf-bf0e94729d30"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.243065 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9cmg\" (UniqueName: \"kubernetes.io/projected/d0b61dea-09c9-4364-9eaf-bf0e94729d30-kube-api-access-l9cmg\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.243104 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.243116 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b61dea-09c9-4364-9eaf-bf0e94729d30-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:47 crc kubenswrapper[4903]: W0128 17:15:47.511362 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd32e25a1_1961_4a59_9d27_7a8fb08a4b97.slice/crio-0ab46a1d5d6cda14322d2a4cd8e7356ba110a25865899e3e0d8067bd63700c08 WatchSource:0}: Error finding container 0ab46a1d5d6cda14322d2a4cd8e7356ba110a25865899e3e0d8067bd63700c08: Status 404 returned error can't find the container with id 0ab46a1d5d6cda14322d2a4cd8e7356ba110a25865899e3e0d8067bd63700c08 Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.511874 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.612969 4903 generic.go:334] "Generic (PLEG): container finished" podID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerID="f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430" exitCode=0 Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.613154 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqlxx" event={"ID":"0c9953d6-c8ce-493f-af50-54a9aa85a7a3","Type":"ContainerDied","Data":"f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430"} Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.615450 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.621369 4903 generic.go:334] "Generic (PLEG): container finished" podID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerID="ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9" exitCode=0 Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.621497 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2cv" 
event={"ID":"7cc41660-aad2-4b4e-8594-7ad053989bb0","Type":"ContainerDied","Data":"ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9"} Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.622757 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d32e25a1-1961-4a59-9d27-7a8fb08a4b97","Type":"ContainerStarted","Data":"0ab46a1d5d6cda14322d2a4cd8e7356ba110a25865899e3e0d8067bd63700c08"} Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.625691 4903 generic.go:334] "Generic (PLEG): container finished" podID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerID="68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc" exitCode=0 Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.625726 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" event={"ID":"d0b61dea-09c9-4364-9eaf-bf0e94729d30","Type":"ContainerDied","Data":"68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc"} Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.625754 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" event={"ID":"d0b61dea-09c9-4364-9eaf-bf0e94729d30","Type":"ContainerDied","Data":"5f7dd91edfa0fe38439234ccb4c165bfa9631315c62df2946ca2dd684fb1913b"} Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.625774 4903 scope.go:117] "RemoveContainer" containerID="68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.625847 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-2zxdz" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.658714 4903 scope.go:117] "RemoveContainer" containerID="feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.675179 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-2zxdz"] Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.681094 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-2zxdz"] Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.686051 4903 scope.go:117] "RemoveContainer" containerID="68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc" Jan 28 17:15:47 crc kubenswrapper[4903]: E0128 17:15:47.688035 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc\": container with ID starting with 68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc not found: ID does not exist" containerID="68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.688069 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc"} err="failed to get container status \"68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc\": rpc error: code = NotFound desc = could not find container \"68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc\": container with ID starting with 68a286432f90146eea888810d05764fd65312359afaa54c7c52078b5b4d97bfc not found: ID does not exist" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.688093 4903 scope.go:117] "RemoveContainer" 
containerID="feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128" Jan 28 17:15:47 crc kubenswrapper[4903]: E0128 17:15:47.688493 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128\": container with ID starting with feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128 not found: ID does not exist" containerID="feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128" Jan 28 17:15:47 crc kubenswrapper[4903]: I0128 17:15:47.688549 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128"} err="failed to get container status \"feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128\": rpc error: code = NotFound desc = could not find container \"feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128\": container with ID starting with feecc32b4805efa076808ec964d1236525fe9616b2811443fdbd2f8a3d4ce128 not found: ID does not exist" Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.439026 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" path="/var/lib/kubelet/pods/d0b61dea-09c9-4364-9eaf-bf0e94729d30/volumes" Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.644988 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d32e25a1-1961-4a59-9d27-7a8fb08a4b97","Type":"ContainerStarted","Data":"f1a0cc46b7b4fe7017fb80a8813ba24d064d05f9d07c58e920e55140b56d7f1c"} Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.645029 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d32e25a1-1961-4a59-9d27-7a8fb08a4b97","Type":"ContainerStarted","Data":"9d82f9d26520ead028e4f8d6290d22b69c3873093b4b8f778793fef9070a8e2d"} Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.645065 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.650284 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqlxx" event={"ID":"0c9953d6-c8ce-493f-af50-54a9aa85a7a3","Type":"ContainerStarted","Data":"2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38"} Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.654478 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2cv" event={"ID":"7cc41660-aad2-4b4e-8594-7ad053989bb0","Type":"ContainerStarted","Data":"2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6"} Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.674701 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.674679614 podStartE2EDuration="2.674679614s" podCreationTimestamp="2026-01-28 17:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:48.665895046 +0000 UTC m=+5420.941866567" watchObservedRunningTime="2026-01-28 17:15:48.674679614 +0000 UTC m=+5420.950651125" Jan 28 17:15:48 crc kubenswrapper[4903]: I0128 17:15:48.713891 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kh2cv" 
podStartSLOduration=2.21431612 podStartE2EDuration="5.713860397s" podCreationTimestamp="2026-01-28 17:15:43 +0000 UTC" firstStartedPulling="2026-01-28 17:15:44.503738825 +0000 UTC m=+5416.779710336" lastFinishedPulling="2026-01-28 17:15:48.003283102 +0000 UTC m=+5420.279254613" observedRunningTime="2026-01-28 17:15:48.703784143 +0000 UTC m=+5420.979755664" watchObservedRunningTime="2026-01-28 17:15:48.713860397 +0000 UTC m=+5420.989831908" Jan 28 17:15:49 crc kubenswrapper[4903]: I0128 17:15:49.662910 4903 generic.go:334] "Generic (PLEG): container finished" podID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerID="2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38" exitCode=0 Jan 28 17:15:49 crc kubenswrapper[4903]: I0128 17:15:49.662959 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqlxx" event={"ID":"0c9953d6-c8ce-493f-af50-54a9aa85a7a3","Type":"ContainerDied","Data":"2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38"} Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.619977 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kq2xd"] Jan 28 17:15:51 crc kubenswrapper[4903]: E0128 17:15:51.620281 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerName="dnsmasq-dns" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.620292 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerName="dnsmasq-dns" Jan 28 17:15:51 crc kubenswrapper[4903]: E0128 17:15:51.620325 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerName="init" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.620331 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerName="init" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.620489 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b61dea-09c9-4364-9eaf-bf0e94729d30" containerName="dnsmasq-dns" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.620984 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.635931 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kq2xd"] Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.719406 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f4a1-account-create-update-4rsdx"] Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.720403 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.723984 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.727465 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f4a1-account-create-update-4rsdx"] Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.733925 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb83cac8-698a-483b-9643-1f6f37fdd873-operator-scripts\") pod \"keystone-db-create-kq2xd\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.733987 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72x57\" (UniqueName: \"kubernetes.io/projected/eb83cac8-698a-483b-9643-1f6f37fdd873-kube-api-access-72x57\") pod \"keystone-db-create-kq2xd\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.836450 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb83cac8-698a-483b-9643-1f6f37fdd873-operator-scripts\") pod \"keystone-db-create-kq2xd\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.836521 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a9abc-0511-41a5-8409-f1b5411ddff0-operator-scripts\") pod \"keystone-f4a1-account-create-update-4rsdx\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.836708 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72x57\" (UniqueName: \"kubernetes.io/projected/eb83cac8-698a-483b-9643-1f6f37fdd873-kube-api-access-72x57\") pod \"keystone-db-create-kq2xd\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.836759 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hmv2\" (UniqueName: \"kubernetes.io/projected/b08a9abc-0511-41a5-8409-f1b5411ddff0-kube-api-access-4hmv2\") pod \"keystone-f4a1-account-create-update-4rsdx\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.938331 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a9abc-0511-41a5-8409-f1b5411ddff0-operator-scripts\") pod \"keystone-f4a1-account-create-update-4rsdx\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.938753 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hmv2\" (UniqueName: 
\"kubernetes.io/projected/b08a9abc-0511-41a5-8409-f1b5411ddff0-kube-api-access-4hmv2\") pod \"keystone-f4a1-account-create-update-4rsdx\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.939202 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a9abc-0511-41a5-8409-f1b5411ddff0-operator-scripts\") pod \"keystone-f4a1-account-create-update-4rsdx\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.939710 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb83cac8-698a-483b-9643-1f6f37fdd873-operator-scripts\") pod \"keystone-db-create-kq2xd\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.940247 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72x57\" (UniqueName: \"kubernetes.io/projected/eb83cac8-698a-483b-9643-1f6f37fdd873-kube-api-access-72x57\") pod \"keystone-db-create-kq2xd\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:51 crc kubenswrapper[4903]: I0128 17:15:51.956243 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hmv2\" (UniqueName: \"kubernetes.io/projected/b08a9abc-0511-41a5-8409-f1b5411ddff0-kube-api-access-4hmv2\") pod \"keystone-f4a1-account-create-update-4rsdx\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:52 crc kubenswrapper[4903]: I0128 17:15:52.089743 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:52 crc kubenswrapper[4903]: I0128 17:15:52.238037 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:52 crc kubenswrapper[4903]: I0128 17:15:52.523692 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f4a1-account-create-update-4rsdx"] Jan 28 17:15:52 crc kubenswrapper[4903]: I0128 17:15:52.691060 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqlxx" event={"ID":"0c9953d6-c8ce-493f-af50-54a9aa85a7a3","Type":"ContainerStarted","Data":"19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361"} Jan 28 17:15:52 crc kubenswrapper[4903]: W0128 17:15:52.921498 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08a9abc_0511_41a5_8409_f1b5411ddff0.slice/crio-4fd724500372ee9d8a6c9705311bf0712c6078100cc170d12bd6336535c9222f WatchSource:0}: Error finding container 4fd724500372ee9d8a6c9705311bf0712c6078100cc170d12bd6336535c9222f: Status 404 returned error can't find the container with id 4fd724500372ee9d8a6c9705311bf0712c6078100cc170d12bd6336535c9222f Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.079343 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kq2xd"] Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.471362 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.471401 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.513411 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.571787 4903 scope.go:117] "RemoveContainer" containerID="3612c95b5cae04b8a083cca14ad662a3d5d412ba6e76178fb8ea385e9dfaab00" Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.593849 4903 scope.go:117] "RemoveContainer" containerID="deed94d927913b00c5c6c75e56f8e4c3e0c6802d21cc2ef40918ad468180d7a8" Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.700750 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f4a1-account-create-update-4rsdx" event={"ID":"b08a9abc-0511-41a5-8409-f1b5411ddff0","Type":"ContainerStarted","Data":"4fd724500372ee9d8a6c9705311bf0712c6078100cc170d12bd6336535c9222f"} Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.702858 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kq2xd" event={"ID":"eb83cac8-698a-483b-9643-1f6f37fdd873","Type":"ContainerStarted","Data":"b58fca590e28d14fe69e8e26284613ac1605fcf6dd1f9cf64bb29435465aee49"} Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.745399 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:53 crc kubenswrapper[4903]: I0128 17:15:53.766687 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kqlxx" podStartSLOduration=5.878951364 podStartE2EDuration="8.766667946s" podCreationTimestamp="2026-01-28 17:15:45 +0000 UTC" firstStartedPulling="2026-01-28 17:15:47.615123562 +0000 UTC m=+5419.891095073" lastFinishedPulling="2026-01-28 17:15:50.502840144 +0000 UTC m=+5422.778811655" observedRunningTime="2026-01-28 17:15:53.72401279 +0000 
UTC m=+5425.999984301" watchObservedRunningTime="2026-01-28 17:15:53.766667946 +0000 UTC m=+5426.042639457" Jan 28 17:15:54 crc kubenswrapper[4903]: I0128 17:15:54.714176 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kq2xd" event={"ID":"eb83cac8-698a-483b-9643-1f6f37fdd873","Type":"ContainerStarted","Data":"b8ba782e487a3828fcd16534bfa296cd5b1788b29e6d891cc1269a22c10222d5"} Jan 28 17:15:54 crc kubenswrapper[4903]: I0128 17:15:54.716251 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f4a1-account-create-update-4rsdx" event={"ID":"b08a9abc-0511-41a5-8409-f1b5411ddff0","Type":"ContainerStarted","Data":"3eeacf0beeb740d67e93d1be76a9e4be7ecb0a22652d5dbbdd2ae458273d7c69"} Jan 28 17:15:54 crc kubenswrapper[4903]: I0128 17:15:54.738613 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-kq2xd" podStartSLOduration=3.738596141 podStartE2EDuration="3.738596141s" podCreationTimestamp="2026-01-28 17:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:54.731043166 +0000 UTC m=+5427.007014717" watchObservedRunningTime="2026-01-28 17:15:54.738596141 +0000 UTC m=+5427.014567652" Jan 28 17:15:54 crc kubenswrapper[4903]: I0128 17:15:54.751301 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f4a1-account-create-update-4rsdx" podStartSLOduration=3.751285815 podStartE2EDuration="3.751285815s" podCreationTimestamp="2026-01-28 17:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:15:54.750696929 +0000 UTC m=+5427.026668500" watchObservedRunningTime="2026-01-28 17:15:54.751285815 +0000 UTC m=+5427.027257326" Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.127057 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kh2cv"] Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.734888 4903 generic.go:334] "Generic (PLEG): container finished" podID="eb83cac8-698a-483b-9643-1f6f37fdd873" containerID="b8ba782e487a3828fcd16534bfa296cd5b1788b29e6d891cc1269a22c10222d5" exitCode=0 Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.734987 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kq2xd" event={"ID":"eb83cac8-698a-483b-9643-1f6f37fdd873","Type":"ContainerDied","Data":"b8ba782e487a3828fcd16534bfa296cd5b1788b29e6d891cc1269a22c10222d5"} Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.737308 4903 generic.go:334] "Generic (PLEG): container finished" podID="b08a9abc-0511-41a5-8409-f1b5411ddff0" containerID="3eeacf0beeb740d67e93d1be76a9e4be7ecb0a22652d5dbbdd2ae458273d7c69" exitCode=0 Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.737714 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kh2cv" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="registry-server" containerID="cri-o://2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6" gracePeriod=2 Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.739410 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f4a1-account-create-update-4rsdx" 
event={"ID":"b08a9abc-0511-41a5-8409-f1b5411ddff0","Type":"ContainerDied","Data":"3eeacf0beeb740d67e93d1be76a9e4be7ecb0a22652d5dbbdd2ae458273d7c69"} Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.900460 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.900616 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:55 crc kubenswrapper[4903]: I0128 17:15:55.986738 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.275733 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.421787 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2npc\" (UniqueName: \"kubernetes.io/projected/7cc41660-aad2-4b4e-8594-7ad053989bb0-kube-api-access-l2npc\") pod \"7cc41660-aad2-4b4e-8594-7ad053989bb0\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.422017 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-catalog-content\") pod \"7cc41660-aad2-4b4e-8594-7ad053989bb0\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.422040 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-utilities\") pod \"7cc41660-aad2-4b4e-8594-7ad053989bb0\" (UID: \"7cc41660-aad2-4b4e-8594-7ad053989bb0\") " Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.423026 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-utilities" (OuterVolumeSpecName: "utilities") pod "7cc41660-aad2-4b4e-8594-7ad053989bb0" (UID: "7cc41660-aad2-4b4e-8594-7ad053989bb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.430086 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc41660-aad2-4b4e-8594-7ad053989bb0-kube-api-access-l2npc" (OuterVolumeSpecName: "kube-api-access-l2npc") pod "7cc41660-aad2-4b4e-8594-7ad053989bb0" (UID: "7cc41660-aad2-4b4e-8594-7ad053989bb0"). InnerVolumeSpecName "kube-api-access-l2npc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.497915 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cc41660-aad2-4b4e-8594-7ad053989bb0" (UID: "7cc41660-aad2-4b4e-8594-7ad053989bb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.524116 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.524159 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc41660-aad2-4b4e-8594-7ad053989bb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.524168 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2npc\" (UniqueName: \"kubernetes.io/projected/7cc41660-aad2-4b4e-8594-7ad053989bb0-kube-api-access-l2npc\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.755064 4903 generic.go:334] "Generic (PLEG): container finished" podID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerID="2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6" exitCode=0 Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.755139 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kh2cv" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.755193 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2cv" event={"ID":"7cc41660-aad2-4b4e-8594-7ad053989bb0","Type":"ContainerDied","Data":"2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6"} Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.755246 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kh2cv" event={"ID":"7cc41660-aad2-4b4e-8594-7ad053989bb0","Type":"ContainerDied","Data":"81897ae28af3a8421432a6893a7f19f079a3873625dbb73b67225c6c5a1c74cf"} Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.755275 4903 scope.go:117] "RemoveContainer" containerID="2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.806920 4903 scope.go:117] "RemoveContainer" containerID="ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.814494 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kh2cv"] Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.824785 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kh2cv"] Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.848931 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.852890 4903 scope.go:117] "RemoveContainer" containerID="67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.899232 4903 scope.go:117] "RemoveContainer" containerID="2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6" Jan 28 17:15:56 crc kubenswrapper[4903]: E0128 17:15:56.899588 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6\": container with ID starting with 
2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6 not found: ID does not exist" containerID="2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.899636 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6"} err="failed to get container status \"2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6\": rpc error: code = NotFound desc = could not find container \"2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6\": container with ID starting with 2689758880cbca2b47b749d4ab31a8006d4be4e4311cd488d778ebc7626da4d6 not found: ID does not exist" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.899660 4903 scope.go:117] "RemoveContainer" containerID="ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9" Jan 28 17:15:56 crc kubenswrapper[4903]: E0128 17:15:56.899922 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9\": container with ID starting with ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9 not found: ID does not exist" containerID="ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.899947 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9"} err="failed to get container status \"ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9\": rpc error: code = NotFound desc = could not find container \"ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9\": container with ID starting with ffdc3f8a1933c9018b625614628447bff18ffa6a9f32fb3356acaca445b75dd9 not found: ID does not exist" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.899963 4903 scope.go:117] "RemoveContainer" containerID="67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8" Jan 28 17:15:56 crc kubenswrapper[4903]: E0128 17:15:56.900356 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8\": container with ID starting with 67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8 not found: ID does not exist" containerID="67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8" Jan 28 17:15:56 crc kubenswrapper[4903]: I0128 17:15:56.900402 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8"} err="failed to get container status \"67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8\": rpc error: code = NotFound desc = could not find container \"67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8\": container with ID starting with 67db9a78f0503416717952bd8c6280615b64bfede4c6a1cdbc3bde7ee56ef0d8 not found: ID does not exist" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.142858 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.223166 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.226482 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.338132 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hmv2\" (UniqueName: \"kubernetes.io/projected/b08a9abc-0511-41a5-8409-f1b5411ddff0-kube-api-access-4hmv2\") pod \"b08a9abc-0511-41a5-8409-f1b5411ddff0\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.338236 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72x57\" (UniqueName: \"kubernetes.io/projected/eb83cac8-698a-483b-9643-1f6f37fdd873-kube-api-access-72x57\") pod \"eb83cac8-698a-483b-9643-1f6f37fdd873\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.338432 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a9abc-0511-41a5-8409-f1b5411ddff0-operator-scripts\") pod \"b08a9abc-0511-41a5-8409-f1b5411ddff0\" (UID: \"b08a9abc-0511-41a5-8409-f1b5411ddff0\") " Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.338509 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb83cac8-698a-483b-9643-1f6f37fdd873-operator-scripts\") pod \"eb83cac8-698a-483b-9643-1f6f37fdd873\" (UID: \"eb83cac8-698a-483b-9643-1f6f37fdd873\") " Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.338942 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08a9abc-0511-41a5-8409-f1b5411ddff0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b08a9abc-0511-41a5-8409-f1b5411ddff0" (UID: "b08a9abc-0511-41a5-8409-f1b5411ddff0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.339023 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb83cac8-698a-483b-9643-1f6f37fdd873-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb83cac8-698a-483b-9643-1f6f37fdd873" (UID: "eb83cac8-698a-483b-9643-1f6f37fdd873"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.343775 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb83cac8-698a-483b-9643-1f6f37fdd873-kube-api-access-72x57" (OuterVolumeSpecName: "kube-api-access-72x57") pod "eb83cac8-698a-483b-9643-1f6f37fdd873" (UID: "eb83cac8-698a-483b-9643-1f6f37fdd873"). InnerVolumeSpecName "kube-api-access-72x57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.344128 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08a9abc-0511-41a5-8409-f1b5411ddff0-kube-api-access-4hmv2" (OuterVolumeSpecName: "kube-api-access-4hmv2") pod "b08a9abc-0511-41a5-8409-f1b5411ddff0" (UID: "b08a9abc-0511-41a5-8409-f1b5411ddff0"). InnerVolumeSpecName "kube-api-access-4hmv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.440725 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08a9abc-0511-41a5-8409-f1b5411ddff0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.440781 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb83cac8-698a-483b-9643-1f6f37fdd873-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.440792 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hmv2\" (UniqueName: \"kubernetes.io/projected/b08a9abc-0511-41a5-8409-f1b5411ddff0-kube-api-access-4hmv2\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.440802 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72x57\" (UniqueName: \"kubernetes.io/projected/eb83cac8-698a-483b-9643-1f6f37fdd873-kube-api-access-72x57\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.766255 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f4a1-account-create-update-4rsdx" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.766275 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f4a1-account-create-update-4rsdx" event={"ID":"b08a9abc-0511-41a5-8409-f1b5411ddff0","Type":"ContainerDied","Data":"4fd724500372ee9d8a6c9705311bf0712c6078100cc170d12bd6336535c9222f"} Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.767810 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd724500372ee9d8a6c9705311bf0712c6078100cc170d12bd6336535c9222f" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.769330 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kq2xd" Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.769379 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kq2xd" event={"ID":"eb83cac8-698a-483b-9643-1f6f37fdd873","Type":"ContainerDied","Data":"b58fca590e28d14fe69e8e26284613ac1605fcf6dd1f9cf64bb29435465aee49"} Jan 28 17:15:57 crc kubenswrapper[4903]: I0128 17:15:57.769442 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58fca590e28d14fe69e8e26284613ac1605fcf6dd1f9cf64bb29435465aee49" Jan 28 17:15:58 crc kubenswrapper[4903]: I0128 17:15:58.326409 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kqlxx"] Jan 28 17:15:58 crc kubenswrapper[4903]: I0128 17:15:58.426273 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" path="/var/lib/kubelet/pods/7cc41660-aad2-4b4e-8594-7ad053989bb0/volumes" Jan 28 17:15:58 crc kubenswrapper[4903]: I0128 17:15:58.779319 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kqlxx" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="registry-server" containerID="cri-o://19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361" gracePeriod=2 Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.192323 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.270940 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26v6c\" (UniqueName: \"kubernetes.io/projected/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-kube-api-access-26v6c\") pod \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.271074 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-catalog-content\") pod \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.271147 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-utilities\") pod \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\" (UID: \"0c9953d6-c8ce-493f-af50-54a9aa85a7a3\") " Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.272209 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-utilities" (OuterVolumeSpecName: "utilities") pod "0c9953d6-c8ce-493f-af50-54a9aa85a7a3" (UID: "0c9953d6-c8ce-493f-af50-54a9aa85a7a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.276524 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-kube-api-access-26v6c" (OuterVolumeSpecName: "kube-api-access-26v6c") pod "0c9953d6-c8ce-493f-af50-54a9aa85a7a3" (UID: "0c9953d6-c8ce-493f-af50-54a9aa85a7a3"). InnerVolumeSpecName "kube-api-access-26v6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.317233 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c9953d6-c8ce-493f-af50-54a9aa85a7a3" (UID: "0c9953d6-c8ce-493f-af50-54a9aa85a7a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.373259 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26v6c\" (UniqueName: \"kubernetes.io/projected/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-kube-api-access-26v6c\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.373296 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.373309 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c9953d6-c8ce-493f-af50-54a9aa85a7a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.794795 4903 generic.go:334] "Generic (PLEG): container finished" podID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerID="19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361" exitCode=0 Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.794874 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kqlxx" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.794904 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqlxx" event={"ID":"0c9953d6-c8ce-493f-af50-54a9aa85a7a3","Type":"ContainerDied","Data":"19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361"} Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.796063 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kqlxx" event={"ID":"0c9953d6-c8ce-493f-af50-54a9aa85a7a3","Type":"ContainerDied","Data":"3fd920ce5ddff34c6a15ad571ddc3547b6e3595c1020ea19a9a904293debb67b"} Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.796095 4903 scope.go:117] "RemoveContainer" containerID="19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.831771 4903 scope.go:117] "RemoveContainer" containerID="2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.837616 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kqlxx"] Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.846963 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kqlxx"] Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.858489 4903 scope.go:117] "RemoveContainer" containerID="f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.893023 4903 scope.go:117] "RemoveContainer" containerID="19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361" Jan 28 17:15:59 crc kubenswrapper[4903]: E0128 17:15:59.893484 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361\": container with ID starting with 19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361 not found: ID does not exist" containerID="19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.893543 
4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361"} err="failed to get container status \"19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361\": rpc error: code = NotFound desc = could not find container \"19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361\": container with ID starting with 19f356c75fc45fb015fb4777b25dd8f8cc3c026f1a2773a344ec4e0aaa70f361 not found: ID does not exist" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.893574 4903 scope.go:117] "RemoveContainer" containerID="2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38" Jan 28 17:15:59 crc kubenswrapper[4903]: E0128 17:15:59.893893 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38\": container with ID starting with 2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38 not found: ID does not exist" containerID="2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.893929 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38"} err="failed to get container status \"2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38\": rpc error: code = NotFound desc = could not find container \"2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38\": container with ID starting with 2542bb8ee8edfcb6e08235aacd72100d7d7ccda0f09ab641c733037677db9d38 not found: ID does not exist" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.893948 4903 scope.go:117] "RemoveContainer" containerID="f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430" Jan 28 17:15:59 crc kubenswrapper[4903]: E0128 17:15:59.894177 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430\": container with ID starting with f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430 not found: ID does not exist" containerID="f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430" Jan 28 17:15:59 crc kubenswrapper[4903]: I0128 17:15:59.894201 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430"} err="failed to get container status \"f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430\": rpc error: code = NotFound desc = could not find container \"f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430\": container with ID starting with f4568bbe03f57b1b911051cba9bc6ce60110a9689cc66e7cbac0c6e3d81a1430 not found: ID does not exist" Jan 28 17:16:00 crc kubenswrapper[4903]: I0128 17:16:00.422984 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" path="/var/lib/kubelet/pods/0c9953d6-c8ce-493f-af50-54a9aa85a7a3/volumes" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.349322 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pr2c7"] Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350009 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="extract-content" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350022 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="extract-content" Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350037 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08a9abc-0511-41a5-8409-f1b5411ddff0" containerName="mariadb-account-create-update" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350043 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08a9abc-0511-41a5-8409-f1b5411ddff0" containerName="mariadb-account-create-update" Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350062 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="extract-content" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350067 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="extract-content" Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350076 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="registry-server" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350083 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="registry-server" Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350093 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="registry-server" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350101 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="registry-server" Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350112 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="extract-utilities" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350118 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="extract-utilities" Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350130 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="extract-utilities" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350136 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="extract-utilities" Jan 28 17:16:02 crc kubenswrapper[4903]: E0128 17:16:02.350153 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb83cac8-698a-483b-9643-1f6f37fdd873" containerName="mariadb-database-create" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350161 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb83cac8-698a-483b-9643-1f6f37fdd873" containerName="mariadb-database-create" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350291 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9953d6-c8ce-493f-af50-54a9aa85a7a3" containerName="registry-server" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350306 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb83cac8-698a-483b-9643-1f6f37fdd873" containerName="mariadb-database-create" Jan 28 17:16:02 crc 
kubenswrapper[4903]: I0128 17:16:02.350317 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc41660-aad2-4b4e-8594-7ad053989bb0" containerName="registry-server" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.350328 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08a9abc-0511-41a5-8409-f1b5411ddff0" containerName="mariadb-account-create-update" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.364946 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.370439 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.370706 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.370728 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gdl7c" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.370714 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.399703 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pr2c7"] Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.423601 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-config-data\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.423642 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-combined-ca-bundle\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.423756 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnbp\" (UniqueName: \"kubernetes.io/projected/523bbae2-5948-4985-978f-4c728efb853d-kube-api-access-2gnbp\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.525491 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-combined-ca-bundle\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.525674 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnbp\" (UniqueName: \"kubernetes.io/projected/523bbae2-5948-4985-978f-4c728efb853d-kube-api-access-2gnbp\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.526065 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-config-data\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.532826 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-combined-ca-bundle\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.532860 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-config-data\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.544854 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnbp\" (UniqueName: \"kubernetes.io/projected/523bbae2-5948-4985-978f-4c728efb853d-kube-api-access-2gnbp\") pod \"keystone-db-sync-pr2c7\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:02 crc kubenswrapper[4903]: I0128 17:16:02.696266 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:03 crc kubenswrapper[4903]: I0128 17:16:03.155714 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pr2c7"] Jan 28 17:16:03 crc kubenswrapper[4903]: I0128 17:16:03.842214 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pr2c7" event={"ID":"523bbae2-5948-4985-978f-4c728efb853d","Type":"ContainerStarted","Data":"82d81b0900522e9e47b68b6f811d992938d53ae7412c24caa20efc99a6da1cfe"} Jan 28 17:16:03 crc kubenswrapper[4903]: I0128 17:16:03.842517 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pr2c7" event={"ID":"523bbae2-5948-4985-978f-4c728efb853d","Type":"ContainerStarted","Data":"9362f8cec59d4ca7189466c1ddc808d65593f6c753aef19992a12ba29aec0102"} Jan 28 17:16:03 crc kubenswrapper[4903]: I0128 17:16:03.865290 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pr2c7" podStartSLOduration=1.865272176 podStartE2EDuration="1.865272176s" podCreationTimestamp="2026-01-28 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:16:03.860302341 +0000 UTC m=+5436.136273852" watchObservedRunningTime="2026-01-28 17:16:03.865272176 +0000 UTC m=+5436.141243677" Jan 28 17:16:05 crc kubenswrapper[4903]: I0128 17:16:05.862323 4903 generic.go:334] "Generic (PLEG): container finished" podID="523bbae2-5948-4985-978f-4c728efb853d" containerID="82d81b0900522e9e47b68b6f811d992938d53ae7412c24caa20efc99a6da1cfe" exitCode=0 Jan 28 17:16:05 crc kubenswrapper[4903]: I0128 17:16:05.862461 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pr2c7" event={"ID":"523bbae2-5948-4985-978f-4c728efb853d","Type":"ContainerDied","Data":"82d81b0900522e9e47b68b6f811d992938d53ae7412c24caa20efc99a6da1cfe"} Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.203192 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.327915 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-config-data\") pod \"523bbae2-5948-4985-978f-4c728efb853d\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.328006 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gnbp\" (UniqueName: \"kubernetes.io/projected/523bbae2-5948-4985-978f-4c728efb853d-kube-api-access-2gnbp\") pod \"523bbae2-5948-4985-978f-4c728efb853d\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.328144 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-combined-ca-bundle\") pod \"523bbae2-5948-4985-978f-4c728efb853d\" (UID: \"523bbae2-5948-4985-978f-4c728efb853d\") " Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.334313 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523bbae2-5948-4985-978f-4c728efb853d-kube-api-access-2gnbp" (OuterVolumeSpecName: "kube-api-access-2gnbp") pod "523bbae2-5948-4985-978f-4c728efb853d" (UID: "523bbae2-5948-4985-978f-4c728efb853d"). InnerVolumeSpecName "kube-api-access-2gnbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.354200 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "523bbae2-5948-4985-978f-4c728efb853d" (UID: "523bbae2-5948-4985-978f-4c728efb853d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.367382 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-config-data" (OuterVolumeSpecName: "config-data") pod "523bbae2-5948-4985-978f-4c728efb853d" (UID: "523bbae2-5948-4985-978f-4c728efb853d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.430329 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.430389 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gnbp\" (UniqueName: \"kubernetes.io/projected/523bbae2-5948-4985-978f-4c728efb853d-kube-api-access-2gnbp\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.430412 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523bbae2-5948-4985-978f-4c728efb853d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.881838 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pr2c7" event={"ID":"523bbae2-5948-4985-978f-4c728efb853d","Type":"ContainerDied","Data":"9362f8cec59d4ca7189466c1ddc808d65593f6c753aef19992a12ba29aec0102"} Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.882179 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9362f8cec59d4ca7189466c1ddc808d65593f6c753aef19992a12ba29aec0102" Jan 28 17:16:07 crc kubenswrapper[4903]: I0128 17:16:07.881911 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pr2c7" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.147082 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75f555c9df-76tds"] Jan 28 17:16:08 crc kubenswrapper[4903]: E0128 17:16:08.147465 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523bbae2-5948-4985-978f-4c728efb853d" containerName="keystone-db-sync" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.147487 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="523bbae2-5948-4985-978f-4c728efb853d" containerName="keystone-db-sync" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.147728 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="523bbae2-5948-4985-978f-4c728efb853d" containerName="keystone-db-sync" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.148761 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.170273 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zdv98"] Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.171626 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.175669 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.177647 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.177704 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.177870 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gdl7c" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.186190 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.188655 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zdv98"] Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.238763 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f555c9df-76tds"] Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245281 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v554w\" (UniqueName: \"kubernetes.io/projected/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-kube-api-access-v554w\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245373 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-dns-svc\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245400 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-combined-ca-bundle\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245418 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-sb\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245475 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkp2\" (UniqueName: \"kubernetes.io/projected/64310972-c89b-4d07-b959-e7ab26705cd3-kube-api-access-fmkp2\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245507 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-config\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: 
\"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245521 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-credential-keys\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245577 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-scripts\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245597 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-nb\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245614 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-config-data\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.245647 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-fernet-keys\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.347767 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-config\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.347835 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-credential-keys\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.347891 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-scripts\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.347913 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-nb\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " 
pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.347935 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-config-data\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.347963 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-fernet-keys\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.347985 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v554w\" (UniqueName: \"kubernetes.io/projected/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-kube-api-access-v554w\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.348030 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-dns-svc\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.348063 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-combined-ca-bundle\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.348088 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-sb\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.348160 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmkp2\" (UniqueName: \"kubernetes.io/projected/64310972-c89b-4d07-b959-e7ab26705cd3-kube-api-access-fmkp2\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.349017 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-config\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.351417 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-dns-svc\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.351446 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-nb\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.354328 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-credential-keys\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.354472 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-combined-ca-bundle\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.354774 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-fernet-keys\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.355118 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-sb\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.358968 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-scripts\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.359329 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-config-data\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.369896 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmkp2\" (UniqueName: \"kubernetes.io/projected/64310972-c89b-4d07-b959-e7ab26705cd3-kube-api-access-fmkp2\") pod \"dnsmasq-dns-75f555c9df-76tds\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.373761 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v554w\" (UniqueName: \"kubernetes.io/projected/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-kube-api-access-v554w\") pod \"keystone-bootstrap-zdv98\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.475665 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.500437 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:08 crc kubenswrapper[4903]: I0128 17:16:08.928002 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75f555c9df-76tds"] Jan 28 17:16:08 crc kubenswrapper[4903]: W0128 17:16:08.928632 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64310972_c89b_4d07_b959_e7ab26705cd3.slice/crio-b95f89212ea3d3b28fff2f72d7f7909e8ccbd55709d26c6e9c030c8da53b8df0 WatchSource:0}: Error finding container b95f89212ea3d3b28fff2f72d7f7909e8ccbd55709d26c6e9c030c8da53b8df0: Status 404 returned error can't find the container with id b95f89212ea3d3b28fff2f72d7f7909e8ccbd55709d26c6e9c030c8da53b8df0 Jan 28 17:16:09 crc kubenswrapper[4903]: I0128 17:16:09.008404 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zdv98"] Jan 28 17:16:09 crc kubenswrapper[4903]: I0128 17:16:09.898451 4903 generic.go:334] "Generic (PLEG): container finished" podID="64310972-c89b-4d07-b959-e7ab26705cd3" containerID="f6ce6b68fd652cf13a492c0f75d08c9ed98dde01853267db79d37f5b3a606352" exitCode=0 Jan 28 17:16:09 crc kubenswrapper[4903]: I0128 17:16:09.898787 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f555c9df-76tds" event={"ID":"64310972-c89b-4d07-b959-e7ab26705cd3","Type":"ContainerDied","Data":"f6ce6b68fd652cf13a492c0f75d08c9ed98dde01853267db79d37f5b3a606352"} Jan 28 17:16:09 crc kubenswrapper[4903]: I0128 17:16:09.898812 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f555c9df-76tds" event={"ID":"64310972-c89b-4d07-b959-e7ab26705cd3","Type":"ContainerStarted","Data":"b95f89212ea3d3b28fff2f72d7f7909e8ccbd55709d26c6e9c030c8da53b8df0"} Jan 28 17:16:09 crc kubenswrapper[4903]: I0128 17:16:09.901248 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdv98" event={"ID":"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a","Type":"ContainerStarted","Data":"c1c41d02722fd4d483222b62bbceb32a3682720522031cbfcfdef71ca6f24e58"} Jan 28 17:16:09 crc kubenswrapper[4903]: I0128 17:16:09.901281 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdv98" event={"ID":"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a","Type":"ContainerStarted","Data":"8bea42f40d4efdcfff48a2acdc79b13faa377b49d5f29594cd90f0c92be933a6"} Jan 28 17:16:09 crc kubenswrapper[4903]: I0128 17:16:09.958264 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zdv98" podStartSLOduration=1.95824032 podStartE2EDuration="1.95824032s" podCreationTimestamp="2026-01-28 17:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:16:09.952082533 +0000 UTC m=+5442.228054044" watchObservedRunningTime="2026-01-28 17:16:09.95824032 +0000 UTC m=+5442.234211831" Jan 28 17:16:10 crc kubenswrapper[4903]: I0128 17:16:10.910636 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f555c9df-76tds" event={"ID":"64310972-c89b-4d07-b959-e7ab26705cd3","Type":"ContainerStarted","Data":"cdc4f47bab6ab45acf12f3fbec5673bc94fd40c66f5d514323a7b647ac99b657"} Jan 28 17:16:10 crc kubenswrapper[4903]: I0128 
17:16:10.911745 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:10 crc kubenswrapper[4903]: I0128 17:16:10.938083 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75f555c9df-76tds" podStartSLOduration=2.938050778 podStartE2EDuration="2.938050778s" podCreationTimestamp="2026-01-28 17:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:16:10.929551257 +0000 UTC m=+5443.205522788" watchObservedRunningTime="2026-01-28 17:16:10.938050778 +0000 UTC m=+5443.214022289" Jan 28 17:16:12 crc kubenswrapper[4903]: I0128 17:16:12.928952 4903 generic.go:334] "Generic (PLEG): container finished" podID="dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" containerID="c1c41d02722fd4d483222b62bbceb32a3682720522031cbfcfdef71ca6f24e58" exitCode=0 Jan 28 17:16:12 crc kubenswrapper[4903]: I0128 17:16:12.929025 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdv98" event={"ID":"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a","Type":"ContainerDied","Data":"c1c41d02722fd4d483222b62bbceb32a3682720522031cbfcfdef71ca6f24e58"} Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.260347 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.362305 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-config-data\") pod \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.362667 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-combined-ca-bundle\") pod \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.362772 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v554w\" (UniqueName: \"kubernetes.io/projected/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-kube-api-access-v554w\") pod \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.362813 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-fernet-keys\") pod \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.362843 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-credential-keys\") pod \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") " Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.362969 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-scripts\") pod \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\" (UID: \"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a\") 
" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.372985 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" (UID: "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.373018 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" (UID: "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.374904 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-kube-api-access-v554w" (OuterVolumeSpecName: "kube-api-access-v554w") pod "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" (UID: "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a"). InnerVolumeSpecName "kube-api-access-v554w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.375278 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-scripts" (OuterVolumeSpecName: "scripts") pod "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" (UID: "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.390402 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-config-data" (OuterVolumeSpecName: "config-data") pod "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" (UID: "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.392753 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" (UID: "dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.465997 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.466029 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.466048 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v554w\" (UniqueName: \"kubernetes.io/projected/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-kube-api-access-v554w\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.466062 4903 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.466072 4903 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.466082 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.949663 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zdv98" event={"ID":"dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a","Type":"ContainerDied","Data":"8bea42f40d4efdcfff48a2acdc79b13faa377b49d5f29594cd90f0c92be933a6"} Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.949702 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bea42f40d4efdcfff48a2acdc79b13faa377b49d5f29594cd90f0c92be933a6" Jan 28 17:16:14 crc kubenswrapper[4903]: I0128 17:16:14.949720 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zdv98" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.109346 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zdv98"] Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.116010 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zdv98"] Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.207450 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xz4k9"] Jan 28 17:16:15 crc kubenswrapper[4903]: E0128 17:16:15.207782 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" containerName="keystone-bootstrap" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.207800 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" containerName="keystone-bootstrap" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.207994 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" containerName="keystone-bootstrap" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.208515 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.211735 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.211778 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.212979 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.212992 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gdl7c" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.213007 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.227762 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xz4k9"] Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.280497 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-config-data\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.280574 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6kb\" (UniqueName: \"kubernetes.io/projected/6925b860-6acd-41e5-a575-5a3d6cb9bb64-kube-api-access-tc6kb\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.280605 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-credential-keys\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 
17:16:15.280636 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-scripts\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.280689 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-combined-ca-bundle\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.280706 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-fernet-keys\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.382906 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-config-data\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.382976 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6kb\" (UniqueName: \"kubernetes.io/projected/6925b860-6acd-41e5-a575-5a3d6cb9bb64-kube-api-access-tc6kb\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.383004 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-credential-keys\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.383031 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-scripts\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.383066 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-combined-ca-bundle\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.383092 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-fernet-keys\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.387273 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-fernet-keys\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.387308 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-scripts\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.387282 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-credential-keys\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.388239 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-config-data\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.390273 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-combined-ca-bundle\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.401259 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6kb\" (UniqueName: \"kubernetes.io/projected/6925b860-6acd-41e5-a575-5a3d6cb9bb64-kube-api-access-tc6kb\") pod \"keystone-bootstrap-xz4k9\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.525020 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.932332 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xz4k9"] Jan 28 17:16:15 crc kubenswrapper[4903]: I0128 17:16:15.959151 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xz4k9" event={"ID":"6925b860-6acd-41e5-a575-5a3d6cb9bb64","Type":"ContainerStarted","Data":"b69fbd18f9bd893f0d70044e320ac58c24978031ec4d21ec2709918088e3b509"} Jan 28 17:16:16 crc kubenswrapper[4903]: I0128 17:16:16.424298 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a" path="/var/lib/kubelet/pods/dcc77120-30e2-4a0b-8c97-b3ee2f63ed0a/volumes" Jan 28 17:16:16 crc kubenswrapper[4903]: I0128 17:16:16.971152 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xz4k9" event={"ID":"6925b860-6acd-41e5-a575-5a3d6cb9bb64","Type":"ContainerStarted","Data":"b455b6c2c1a175d84654681a61f0a9ee65cdcb3d108ca5b17fd86f9cac54bfde"} Jan 28 17:16:17 crc kubenswrapper[4903]: I0128 17:16:17.007691 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xz4k9" podStartSLOduration=2.007662899 podStartE2EDuration="2.007662899s" podCreationTimestamp="2026-01-28 17:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:16:17.006027215 +0000 UTC m=+5449.281998736" watchObservedRunningTime="2026-01-28 17:16:17.007662899 +0000 UTC m=+5449.283634430" Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.477769 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.551212 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6998c99fcf-lzx7g"] Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.551442 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" podUID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerName="dnsmasq-dns" containerID="cri-o://ac7adb019d5a19fee0a814ce5780886268d62bceacd8f0c2daaa9a6f1d868dea" gracePeriod=10 Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.992681 4903 generic.go:334] "Generic (PLEG): container finished" podID="6925b860-6acd-41e5-a575-5a3d6cb9bb64" containerID="b455b6c2c1a175d84654681a61f0a9ee65cdcb3d108ca5b17fd86f9cac54bfde" exitCode=0 Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.992891 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xz4k9" event={"ID":"6925b860-6acd-41e5-a575-5a3d6cb9bb64","Type":"ContainerDied","Data":"b455b6c2c1a175d84654681a61f0a9ee65cdcb3d108ca5b17fd86f9cac54bfde"} Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.997786 4903 generic.go:334] "Generic (PLEG): container finished" podID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerID="ac7adb019d5a19fee0a814ce5780886268d62bceacd8f0c2daaa9a6f1d868dea" exitCode=0 Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.997848 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" event={"ID":"68b6e5f8-e4da-4d0b-a062-953348527ac6","Type":"ContainerDied","Data":"ac7adb019d5a19fee0a814ce5780886268d62bceacd8f0c2daaa9a6f1d868dea"} Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 
17:16:18.997877 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" event={"ID":"68b6e5f8-e4da-4d0b-a062-953348527ac6","Type":"ContainerDied","Data":"b9bbefdf90921b4e4b1c90d7b7c2fa665559d9bf8c21989f44c64842ef8a66e6"} Jan 28 17:16:18 crc kubenswrapper[4903]: I0128 17:16:18.997890 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bbefdf90921b4e4b1c90d7b7c2fa665559d9bf8c21989f44c64842ef8a66e6" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.031809 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.165071 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5dv8\" (UniqueName: \"kubernetes.io/projected/68b6e5f8-e4da-4d0b-a062-953348527ac6-kube-api-access-h5dv8\") pod \"68b6e5f8-e4da-4d0b-a062-953348527ac6\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.165195 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-sb\") pod \"68b6e5f8-e4da-4d0b-a062-953348527ac6\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.165229 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-dns-svc\") pod \"68b6e5f8-e4da-4d0b-a062-953348527ac6\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.165269 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-nb\") pod \"68b6e5f8-e4da-4d0b-a062-953348527ac6\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.165382 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-config\") pod \"68b6e5f8-e4da-4d0b-a062-953348527ac6\" (UID: \"68b6e5f8-e4da-4d0b-a062-953348527ac6\") " Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.182019 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b6e5f8-e4da-4d0b-a062-953348527ac6-kube-api-access-h5dv8" (OuterVolumeSpecName: "kube-api-access-h5dv8") pod "68b6e5f8-e4da-4d0b-a062-953348527ac6" (UID: "68b6e5f8-e4da-4d0b-a062-953348527ac6"). InnerVolumeSpecName "kube-api-access-h5dv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.213050 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68b6e5f8-e4da-4d0b-a062-953348527ac6" (UID: "68b6e5f8-e4da-4d0b-a062-953348527ac6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.214120 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68b6e5f8-e4da-4d0b-a062-953348527ac6" (UID: "68b6e5f8-e4da-4d0b-a062-953348527ac6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.218642 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68b6e5f8-e4da-4d0b-a062-953348527ac6" (UID: "68b6e5f8-e4da-4d0b-a062-953348527ac6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.223879 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-config" (OuterVolumeSpecName: "config") pod "68b6e5f8-e4da-4d0b-a062-953348527ac6" (UID: "68b6e5f8-e4da-4d0b-a062-953348527ac6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.266822 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.266848 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.266858 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5dv8\" (UniqueName: \"kubernetes.io/projected/68b6e5f8-e4da-4d0b-a062-953348527ac6-kube-api-access-h5dv8\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.266867 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:19 crc kubenswrapper[4903]: I0128 17:16:19.266877 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6e5f8-e4da-4d0b-a062-953348527ac6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.006731 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6998c99fcf-lzx7g" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.044415 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6998c99fcf-lzx7g"] Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.052471 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6998c99fcf-lzx7g"] Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.314072 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.386230 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-credential-keys\") pod \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.386629 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc6kb\" (UniqueName: \"kubernetes.io/projected/6925b860-6acd-41e5-a575-5a3d6cb9bb64-kube-api-access-tc6kb\") pod \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.386739 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-scripts\") pod \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.386852 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-fernet-keys\") pod \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.386990 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-config-data\") pod \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.387155 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-combined-ca-bundle\") pod \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\" (UID: \"6925b860-6acd-41e5-a575-5a3d6cb9bb64\") " Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.391193 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6925b860-6acd-41e5-a575-5a3d6cb9bb64" (UID: "6925b860-6acd-41e5-a575-5a3d6cb9bb64"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.391230 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6925b860-6acd-41e5-a575-5a3d6cb9bb64-kube-api-access-tc6kb" (OuterVolumeSpecName: "kube-api-access-tc6kb") pod "6925b860-6acd-41e5-a575-5a3d6cb9bb64" (UID: "6925b860-6acd-41e5-a575-5a3d6cb9bb64"). InnerVolumeSpecName "kube-api-access-tc6kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.392016 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6925b860-6acd-41e5-a575-5a3d6cb9bb64" (UID: "6925b860-6acd-41e5-a575-5a3d6cb9bb64"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.393611 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-scripts" (OuterVolumeSpecName: "scripts") pod "6925b860-6acd-41e5-a575-5a3d6cb9bb64" (UID: "6925b860-6acd-41e5-a575-5a3d6cb9bb64"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.407410 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-config-data" (OuterVolumeSpecName: "config-data") pod "6925b860-6acd-41e5-a575-5a3d6cb9bb64" (UID: "6925b860-6acd-41e5-a575-5a3d6cb9bb64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.408377 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6925b860-6acd-41e5-a575-5a3d6cb9bb64" (UID: "6925b860-6acd-41e5-a575-5a3d6cb9bb64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.424504 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b6e5f8-e4da-4d0b-a062-953348527ac6" path="/var/lib/kubelet/pods/68b6e5f8-e4da-4d0b-a062-953348527ac6/volumes" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.490989 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.491234 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.491248 4903 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.491258 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc6kb\" (UniqueName: \"kubernetes.io/projected/6925b860-6acd-41e5-a575-5a3d6cb9bb64-kube-api-access-tc6kb\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.491266 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:20 crc kubenswrapper[4903]: I0128 17:16:20.491276 4903 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6925b860-6acd-41e5-a575-5a3d6cb9bb64-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.017704 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xz4k9" event={"ID":"6925b860-6acd-41e5-a575-5a3d6cb9bb64","Type":"ContainerDied","Data":"b69fbd18f9bd893f0d70044e320ac58c24978031ec4d21ec2709918088e3b509"} Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.017750 
4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69fbd18f9bd893f0d70044e320ac58c24978031ec4d21ec2709918088e3b509" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.018675 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xz4k9" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.500472 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7779468765-jvvm4"] Jan 28 17:16:21 crc kubenswrapper[4903]: E0128 17:16:21.500875 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6925b860-6acd-41e5-a575-5a3d6cb9bb64" containerName="keystone-bootstrap" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.500898 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6925b860-6acd-41e5-a575-5a3d6cb9bb64" containerName="keystone-bootstrap" Jan 28 17:16:21 crc kubenswrapper[4903]: E0128 17:16:21.500926 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerName="dnsmasq-dns" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.500935 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerName="dnsmasq-dns" Jan 28 17:16:21 crc kubenswrapper[4903]: E0128 17:16:21.500966 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerName="init" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.500974 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerName="init" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.501147 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b6e5f8-e4da-4d0b-a062-953348527ac6" containerName="dnsmasq-dns" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.501170 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6925b860-6acd-41e5-a575-5a3d6cb9bb64" containerName="keystone-bootstrap" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.501847 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.504478 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.504639 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.504684 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.504693 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.504715 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gdl7c" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.505022 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.513099 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7779468765-jvvm4"] Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.609767 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-internal-tls-certs\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.609869 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-fernet-keys\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.609919 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-credential-keys\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.610035 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-public-tls-certs\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.610103 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-combined-ca-bundle\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.610165 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krj4t\" (UniqueName: \"kubernetes.io/projected/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-kube-api-access-krj4t\") pod 
\"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.610213 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-config-data\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.610591 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-scripts\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712468 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-scripts\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712564 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-internal-tls-certs\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712619 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-fernet-keys\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712663 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-credential-keys\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712690 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-combined-ca-bundle\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712717 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-public-tls-certs\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712746 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krj4t\" (UniqueName: \"kubernetes.io/projected/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-kube-api-access-krj4t\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " 
pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.712983 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-config-data\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.718060 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-credential-keys\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.718076 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-public-tls-certs\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.718084 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-fernet-keys\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.718668 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-combined-ca-bundle\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.718684 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-scripts\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.720570 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-internal-tls-certs\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.724210 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-config-data\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.729982 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krj4t\" (UniqueName: \"kubernetes.io/projected/dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf-kube-api-access-krj4t\") pod \"keystone-7779468765-jvvm4\" (UID: \"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf\") " pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:21 crc kubenswrapper[4903]: I0128 17:16:21.823642 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:22 crc kubenswrapper[4903]: I0128 17:16:22.277909 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7779468765-jvvm4"] Jan 28 17:16:23 crc kubenswrapper[4903]: I0128 17:16:23.039153 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7779468765-jvvm4" event={"ID":"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf","Type":"ContainerStarted","Data":"497e532d4f7b6d6db6ebc94b50323a9ed2196796114d632717b75ae0b9fee196"} Jan 28 17:16:23 crc kubenswrapper[4903]: I0128 17:16:23.039496 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7779468765-jvvm4" event={"ID":"dc3c5fe9-b478-4d78-b1a4-1c371bdb05cf","Type":"ContainerStarted","Data":"b60bdb9fbcd9136db2b7819cbe5749b68f1b10218150e6b444f9522e7d51f915"} Jan 28 17:16:23 crc kubenswrapper[4903]: I0128 17:16:23.039518 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:23 crc kubenswrapper[4903]: I0128 17:16:23.058450 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7779468765-jvvm4" podStartSLOduration=2.058428459 podStartE2EDuration="2.058428459s" podCreationTimestamp="2026-01-28 17:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:16:23.05515024 +0000 UTC m=+5455.331121771" watchObservedRunningTime="2026-01-28 17:16:23.058428459 +0000 UTC m=+5455.334399970" Jan 28 17:16:26 crc kubenswrapper[4903]: I0128 17:16:26.613475 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:16:26 crc kubenswrapper[4903]: I0128 17:16:26.614074 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:16:53 crc kubenswrapper[4903]: I0128 17:16:53.489033 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7779468765-jvvm4" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.613344 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.614073 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.663955 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.665350 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.669301 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.669564 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.669662 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-glwnr" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.682395 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.699427 4903 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5113a647-ee24-4525-a000-a961bf6c50ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:56Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T17:16:56Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\\\",\\\"name\\\":\\\"combined-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbs58\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T17:16:56Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.704666 4903 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/openstackclient"] Jan 28 17:16:56 crc kubenswrapper[4903]: E0128 17:16:56.714081 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-jbs58 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-jbs58 openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="5113a647-ee24-4525-a000-a961bf6c50ee" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.725756 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.738022 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.739357 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.746194 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.764466 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5113a647-ee24-4525-a000-a961bf6c50ee" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.840567 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lll\" (UniqueName: \"kubernetes.io/projected/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-kube-api-access-v8lll\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.840935 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.840979 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.841050 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config-secret\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.942291 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config-secret\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.942404 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8lll\" (UniqueName: 
\"kubernetes.io/projected/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-kube-api-access-v8lll\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.942437 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.942474 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.943204 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.949180 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config-secret\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.949288 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:56 crc kubenswrapper[4903]: I0128 17:16:56.960817 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8lll\" (UniqueName: \"kubernetes.io/projected/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-kube-api-access-v8lll\") pod \"openstackclient\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " pod="openstack/openstackclient" Jan 28 17:16:57 crc kubenswrapper[4903]: I0128 17:16:57.062865 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:16:57 crc kubenswrapper[4903]: I0128 17:16:57.307013 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:16:57 crc kubenswrapper[4903]: I0128 17:16:57.311161 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5113a647-ee24-4525-a000-a961bf6c50ee" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" Jan 28 17:16:57 crc kubenswrapper[4903]: I0128 17:16:57.318163 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 17:16:57 crc kubenswrapper[4903]: I0128 17:16:57.321178 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5113a647-ee24-4525-a000-a961bf6c50ee" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" Jan 28 17:16:57 crc kubenswrapper[4903]: I0128 17:16:57.493274 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:16:57 crc kubenswrapper[4903]: W0128 17:16:57.502339 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96647ffd_a0c7_46f7_94f7_3ad08ae5de09.slice/crio-d3559c629968661d2258bea810e2ea31b36ca20d291a119dca70e099fd65f63d WatchSource:0}: Error finding container d3559c629968661d2258bea810e2ea31b36ca20d291a119dca70e099fd65f63d: Status 404 returned error can't find the container with id d3559c629968661d2258bea810e2ea31b36ca20d291a119dca70e099fd65f63d Jan 28 17:16:58 crc kubenswrapper[4903]: I0128 17:16:58.316109 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:16:58 crc kubenswrapper[4903]: I0128 17:16:58.316122 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"96647ffd-a0c7-46f7-94f7-3ad08ae5de09","Type":"ContainerStarted","Data":"b7d6438b08337c0639030e73a28b39c4fd9de920bd1f086a6698676889bc5677"} Jan 28 17:16:58 crc kubenswrapper[4903]: I0128 17:16:58.316430 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"96647ffd-a0c7-46f7-94f7-3ad08ae5de09","Type":"ContainerStarted","Data":"d3559c629968661d2258bea810e2ea31b36ca20d291a119dca70e099fd65f63d"} Jan 28 17:16:58 crc kubenswrapper[4903]: I0128 17:16:58.335160 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5113a647-ee24-4525-a000-a961bf6c50ee" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" Jan 28 17:16:58 crc kubenswrapper[4903]: I0128 17:16:58.335466 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.335452544 podStartE2EDuration="2.335452544s" podCreationTimestamp="2026-01-28 17:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:16:58.332593856 +0000 UTC m=+5490.608565377" watchObservedRunningTime="2026-01-28 17:16:58.335452544 +0000 UTC m=+5490.611424055" Jan 28 17:16:58 crc kubenswrapper[4903]: I0128 17:16:58.425571 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5113a647-ee24-4525-a000-a961bf6c50ee" path="/var/lib/kubelet/pods/5113a647-ee24-4525-a000-a961bf6c50ee/volumes" Jan 28 17:17:26 crc kubenswrapper[4903]: I0128 17:17:26.614026 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:17:26 crc kubenswrapper[4903]: I0128 17:17:26.616919 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:17:26 crc kubenswrapper[4903]: I0128 17:17:26.617007 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:17:26 crc kubenswrapper[4903]: I0128 17:17:26.618278 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7378b6481e12992f0a6ba3f03ca88e1ce24c2396b78c53e3ea7dd86651deb56a"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:17:26 crc kubenswrapper[4903]: I0128 17:17:26.618356 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://7378b6481e12992f0a6ba3f03ca88e1ce24c2396b78c53e3ea7dd86651deb56a" gracePeriod=600 Jan 28 17:17:27 crc kubenswrapper[4903]: I0128 17:17:27.605796 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="7378b6481e12992f0a6ba3f03ca88e1ce24c2396b78c53e3ea7dd86651deb56a" exitCode=0 Jan 28 17:17:27 crc kubenswrapper[4903]: I0128 17:17:27.605865 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"7378b6481e12992f0a6ba3f03ca88e1ce24c2396b78c53e3ea7dd86651deb56a"} Jan 28 17:17:27 crc kubenswrapper[4903]: I0128 17:17:27.606487 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce"} Jan 28 17:17:27 crc kubenswrapper[4903]: I0128 17:17:27.606516 4903 scope.go:117] "RemoveContainer" containerID="7d0099568db141305932182725453872e7855bbd2a54d571454a1232246c1df7" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.002817 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-fbkf8"] Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.004782 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.014494 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fbkf8"] Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.101997 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7bf8-account-create-update-x5n7k"] Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.103350 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.111018 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.112499 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7bf8-account-create-update-x5n7k"] Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.121321 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-operator-scripts\") pod \"barbican-db-create-fbkf8\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.121381 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrxdb\" (UniqueName: \"kubernetes.io/projected/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-kube-api-access-lrxdb\") pod \"barbican-db-create-fbkf8\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.223847 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmzz\" (UniqueName: \"kubernetes.io/projected/64aa9df3-905f-457d-ae9e-2bbff742fe60-kube-api-access-wlmzz\") pod \"barbican-7bf8-account-create-update-x5n7k\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.223922 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-operator-scripts\") pod \"barbican-db-create-fbkf8\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.223953 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrxdb\" (UniqueName: \"kubernetes.io/projected/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-kube-api-access-lrxdb\") pod \"barbican-db-create-fbkf8\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.224091 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64aa9df3-905f-457d-ae9e-2bbff742fe60-operator-scripts\") pod \"barbican-7bf8-account-create-update-x5n7k\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.224791 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-operator-scripts\") pod \"barbican-db-create-fbkf8\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.254357 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrxdb\" (UniqueName: \"kubernetes.io/projected/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-kube-api-access-lrxdb\") pod 
\"barbican-db-create-fbkf8\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.322781 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.326429 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmzz\" (UniqueName: \"kubernetes.io/projected/64aa9df3-905f-457d-ae9e-2bbff742fe60-kube-api-access-wlmzz\") pod \"barbican-7bf8-account-create-update-x5n7k\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.326552 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64aa9df3-905f-457d-ae9e-2bbff742fe60-operator-scripts\") pod \"barbican-7bf8-account-create-update-x5n7k\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.327731 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64aa9df3-905f-457d-ae9e-2bbff742fe60-operator-scripts\") pod \"barbican-7bf8-account-create-update-x5n7k\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.351286 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmzz\" (UniqueName: \"kubernetes.io/projected/64aa9df3-905f-457d-ae9e-2bbff742fe60-kube-api-access-wlmzz\") pod \"barbican-7bf8-account-create-update-x5n7k\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.421150 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.852949 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fbkf8"] Jan 28 17:18:38 crc kubenswrapper[4903]: I0128 17:18:38.955704 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7bf8-account-create-update-x5n7k"] Jan 28 17:18:38 crc kubenswrapper[4903]: W0128 17:18:38.959804 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64aa9df3_905f_457d_ae9e_2bbff742fe60.slice/crio-06e1004609ee7b16e6e72a486114f3b483c8ee5e1143ff6a89b97191540dbba1 WatchSource:0}: Error finding container 06e1004609ee7b16e6e72a486114f3b483c8ee5e1143ff6a89b97191540dbba1: Status 404 returned error can't find the container with id 06e1004609ee7b16e6e72a486114f3b483c8ee5e1143ff6a89b97191540dbba1 Jan 28 17:18:39 crc kubenswrapper[4903]: I0128 17:18:39.162303 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7bf8-account-create-update-x5n7k" event={"ID":"64aa9df3-905f-457d-ae9e-2bbff742fe60","Type":"ContainerStarted","Data":"d2ded43d112077a9afd63087140897f16e6b8ec3bd607d7c51c7473b317b8f4d"} Jan 28 17:18:39 crc kubenswrapper[4903]: I0128 17:18:39.162354 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7bf8-account-create-update-x5n7k" event={"ID":"64aa9df3-905f-457d-ae9e-2bbff742fe60","Type":"ContainerStarted","Data":"06e1004609ee7b16e6e72a486114f3b483c8ee5e1143ff6a89b97191540dbba1"} Jan 28 17:18:39 crc kubenswrapper[4903]: I0128 17:18:39.164585 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fbkf8" event={"ID":"9d2acc5e-acc6-4ea7-8212-927d3e2749fe","Type":"ContainerStarted","Data":"56f6fb1e8789284a6648a017b2dd592f659a02667222a2bcf1491cb3fa204da0"} Jan 28 17:18:39 crc kubenswrapper[4903]: I0128 17:18:39.164611 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fbkf8" event={"ID":"9d2acc5e-acc6-4ea7-8212-927d3e2749fe","Type":"ContainerStarted","Data":"f9202b41bb1a7712254f6e72fd7b6e0c149f24fea7da801888b91f00e50c17f3"} Jan 28 17:18:39 crc kubenswrapper[4903]: I0128 17:18:39.178433 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-7bf8-account-create-update-x5n7k" podStartSLOduration=1.178417777 podStartE2EDuration="1.178417777s" podCreationTimestamp="2026-01-28 17:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:39.175991461 +0000 UTC m=+5591.451962972" watchObservedRunningTime="2026-01-28 17:18:39.178417777 +0000 UTC m=+5591.454389288" Jan 28 17:18:39 crc kubenswrapper[4903]: I0128 17:18:39.195973 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-fbkf8" podStartSLOduration=2.195953513 podStartE2EDuration="2.195953513s" podCreationTimestamp="2026-01-28 17:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:39.191911463 +0000 UTC m=+5591.467882984" watchObservedRunningTime="2026-01-28 17:18:39.195953513 +0000 UTC m=+5591.471925024" Jan 28 17:18:40 crc kubenswrapper[4903]: I0128 17:18:40.173266 4903 generic.go:334] "Generic (PLEG): container finished" podID="64aa9df3-905f-457d-ae9e-2bbff742fe60" 
containerID="d2ded43d112077a9afd63087140897f16e6b8ec3bd607d7c51c7473b317b8f4d" exitCode=0 Jan 28 17:18:40 crc kubenswrapper[4903]: I0128 17:18:40.173347 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7bf8-account-create-update-x5n7k" event={"ID":"64aa9df3-905f-457d-ae9e-2bbff742fe60","Type":"ContainerDied","Data":"d2ded43d112077a9afd63087140897f16e6b8ec3bd607d7c51c7473b317b8f4d"} Jan 28 17:18:40 crc kubenswrapper[4903]: I0128 17:18:40.175432 4903 generic.go:334] "Generic (PLEG): container finished" podID="9d2acc5e-acc6-4ea7-8212-927d3e2749fe" containerID="56f6fb1e8789284a6648a017b2dd592f659a02667222a2bcf1491cb3fa204da0" exitCode=0 Jan 28 17:18:40 crc kubenswrapper[4903]: I0128 17:18:40.175506 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fbkf8" event={"ID":"9d2acc5e-acc6-4ea7-8212-927d3e2749fe","Type":"ContainerDied","Data":"56f6fb1e8789284a6648a017b2dd592f659a02667222a2bcf1491cb3fa204da0"} Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.560680 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.568146 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.685103 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64aa9df3-905f-457d-ae9e-2bbff742fe60-operator-scripts\") pod \"64aa9df3-905f-457d-ae9e-2bbff742fe60\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.685179 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlmzz\" (UniqueName: \"kubernetes.io/projected/64aa9df3-905f-457d-ae9e-2bbff742fe60-kube-api-access-wlmzz\") pod \"64aa9df3-905f-457d-ae9e-2bbff742fe60\" (UID: \"64aa9df3-905f-457d-ae9e-2bbff742fe60\") " Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.685371 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-operator-scripts\") pod \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.685393 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrxdb\" (UniqueName: \"kubernetes.io/projected/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-kube-api-access-lrxdb\") pod \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\" (UID: \"9d2acc5e-acc6-4ea7-8212-927d3e2749fe\") " Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.688718 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64aa9df3-905f-457d-ae9e-2bbff742fe60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64aa9df3-905f-457d-ae9e-2bbff742fe60" (UID: "64aa9df3-905f-457d-ae9e-2bbff742fe60"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.688974 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d2acc5e-acc6-4ea7-8212-927d3e2749fe" (UID: "9d2acc5e-acc6-4ea7-8212-927d3e2749fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.696207 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4zsqm"] Jan 28 17:18:41 crc kubenswrapper[4903]: E0128 17:18:41.696642 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64aa9df3-905f-457d-ae9e-2bbff742fe60" containerName="mariadb-account-create-update" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.696667 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64aa9df3-905f-457d-ae9e-2bbff742fe60" containerName="mariadb-account-create-update" Jan 28 17:18:41 crc kubenswrapper[4903]: E0128 17:18:41.696706 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2acc5e-acc6-4ea7-8212-927d3e2749fe" containerName="mariadb-database-create" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.696714 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2acc5e-acc6-4ea7-8212-927d3e2749fe" containerName="mariadb-database-create" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.699262 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2acc5e-acc6-4ea7-8212-927d3e2749fe" containerName="mariadb-database-create" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.699341 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="64aa9df3-905f-457d-ae9e-2bbff742fe60" containerName="mariadb-account-create-update" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.698543 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64aa9df3-905f-457d-ae9e-2bbff742fe60-kube-api-access-wlmzz" (OuterVolumeSpecName: "kube-api-access-wlmzz") pod "64aa9df3-905f-457d-ae9e-2bbff742fe60" (UID: "64aa9df3-905f-457d-ae9e-2bbff742fe60"). InnerVolumeSpecName "kube-api-access-wlmzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.701325 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.704327 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-kube-api-access-lrxdb" (OuterVolumeSpecName: "kube-api-access-lrxdb") pod "9d2acc5e-acc6-4ea7-8212-927d3e2749fe" (UID: "9d2acc5e-acc6-4ea7-8212-927d3e2749fe"). InnerVolumeSpecName "kube-api-access-lrxdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.706664 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zsqm"] Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.786949 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfmg\" (UniqueName: \"kubernetes.io/projected/86168cbe-c0fc-4436-bee5-01d30c25884a-kube-api-access-zwfmg\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.786989 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-catalog-content\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.787086 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-utilities\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.787156 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.787173 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrxdb\" (UniqueName: \"kubernetes.io/projected/9d2acc5e-acc6-4ea7-8212-927d3e2749fe-kube-api-access-lrxdb\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.787183 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64aa9df3-905f-457d-ae9e-2bbff742fe60-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.787191 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlmzz\" (UniqueName: \"kubernetes.io/projected/64aa9df3-905f-457d-ae9e-2bbff742fe60-kube-api-access-wlmzz\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.891564 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwfmg\" (UniqueName: \"kubernetes.io/projected/86168cbe-c0fc-4436-bee5-01d30c25884a-kube-api-access-zwfmg\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.891640 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-catalog-content\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.891750 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-utilities\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.892356 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-catalog-content\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.892421 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-utilities\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:41 crc kubenswrapper[4903]: I0128 17:18:41.915208 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwfmg\" (UniqueName: \"kubernetes.io/projected/86168cbe-c0fc-4436-bee5-01d30c25884a-kube-api-access-zwfmg\") pod \"redhat-marketplace-4zsqm\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.091131 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.202259 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fbkf8" event={"ID":"9d2acc5e-acc6-4ea7-8212-927d3e2749fe","Type":"ContainerDied","Data":"f9202b41bb1a7712254f6e72fd7b6e0c149f24fea7da801888b91f00e50c17f3"} Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.202301 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9202b41bb1a7712254f6e72fd7b6e0c149f24fea7da801888b91f00e50c17f3" Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.202322 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fbkf8" Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.203925 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7bf8-account-create-update-x5n7k" event={"ID":"64aa9df3-905f-457d-ae9e-2bbff742fe60","Type":"ContainerDied","Data":"06e1004609ee7b16e6e72a486114f3b483c8ee5e1143ff6a89b97191540dbba1"} Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.203951 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e1004609ee7b16e6e72a486114f3b483c8ee5e1143ff6a89b97191540dbba1" Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.203999 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7bf8-account-create-update-x5n7k" Jan 28 17:18:42 crc kubenswrapper[4903]: I0128 17:18:42.563757 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zsqm"] Jan 28 17:18:42 crc kubenswrapper[4903]: W0128 17:18:42.565475 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86168cbe_c0fc_4436_bee5_01d30c25884a.slice/crio-2c020adf60b68a9fa397716378a851874a13855a35c5620a8282ac53af30311f WatchSource:0}: Error finding container 2c020adf60b68a9fa397716378a851874a13855a35c5620a8282ac53af30311f: Status 404 returned error can't find the container with id 2c020adf60b68a9fa397716378a851874a13855a35c5620a8282ac53af30311f Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.217124 4903 generic.go:334] "Generic (PLEG): container finished" podID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerID="cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4" exitCode=0 Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.217190 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zsqm" event={"ID":"86168cbe-c0fc-4436-bee5-01d30c25884a","Type":"ContainerDied","Data":"cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4"} Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.217260 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zsqm" event={"ID":"86168cbe-c0fc-4436-bee5-01d30c25884a","Type":"ContainerStarted","Data":"2c020adf60b68a9fa397716378a851874a13855a35c5620a8282ac53af30311f"} Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.452512 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xhsp5"] Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.454068 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.465324 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xhsp5"] Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.488741 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.488741 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h52b9" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.516065 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-combined-ca-bundle\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.516130 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkm59\" (UniqueName: \"kubernetes.io/projected/13adc33f-5819-4774-82e5-eefd361bd22c-kube-api-access-tkm59\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.516187 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-db-sync-config-data\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.617999 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-combined-ca-bundle\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.618051 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkm59\" (UniqueName: \"kubernetes.io/projected/13adc33f-5819-4774-82e5-eefd361bd22c-kube-api-access-tkm59\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.618094 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-db-sync-config-data\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.624655 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-combined-ca-bundle\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.636624 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-db-sync-config-data\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.637696 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkm59\" (UniqueName: \"kubernetes.io/projected/13adc33f-5819-4774-82e5-eefd361bd22c-kube-api-access-tkm59\") pod \"barbican-db-sync-xhsp5\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:43 crc kubenswrapper[4903]: I0128 17:18:43.801764 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:44 crc kubenswrapper[4903]: I0128 17:18:44.226656 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zsqm" event={"ID":"86168cbe-c0fc-4436-bee5-01d30c25884a","Type":"ContainerStarted","Data":"db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76"} Jan 28 17:18:44 crc kubenswrapper[4903]: I0128 17:18:44.245549 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xhsp5"] Jan 28 17:18:45 crc kubenswrapper[4903]: I0128 17:18:45.236028 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xhsp5" event={"ID":"13adc33f-5819-4774-82e5-eefd361bd22c","Type":"ContainerStarted","Data":"808bf977ddaa90666b89a675ac9bffc1c6ae565cb10e01b70a556a566c321959"} Jan 28 17:18:45 crc kubenswrapper[4903]: I0128 17:18:45.236912 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xhsp5" event={"ID":"13adc33f-5819-4774-82e5-eefd361bd22c","Type":"ContainerStarted","Data":"4380d2e247c0abf83da19608738ef49a6c68c98109a2eaabe9d2c1a6af90d941"} Jan 28 17:18:45 crc kubenswrapper[4903]: I0128 17:18:45.239279 4903 generic.go:334] "Generic (PLEG): container finished" podID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerID="db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76" exitCode=0 Jan 28 17:18:45 crc kubenswrapper[4903]: I0128 17:18:45.239365 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zsqm" event={"ID":"86168cbe-c0fc-4436-bee5-01d30c25884a","Type":"ContainerDied","Data":"db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76"} Jan 28 17:18:45 crc kubenswrapper[4903]: I0128 17:18:45.253177 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xhsp5" podStartSLOduration=2.253161848 podStartE2EDuration="2.253161848s" podCreationTimestamp="2026-01-28 17:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:45.251854213 +0000 UTC m=+5597.527825714" watchObservedRunningTime="2026-01-28 17:18:45.253161848 +0000 UTC m=+5597.529133359" Jan 28 17:18:46 crc kubenswrapper[4903]: I0128 17:18:46.248631 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zsqm" event={"ID":"86168cbe-c0fc-4436-bee5-01d30c25884a","Type":"ContainerStarted","Data":"19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102"} Jan 28 17:18:46 crc kubenswrapper[4903]: I0128 17:18:46.269854 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4zsqm" podStartSLOduration=2.621015354 
podStartE2EDuration="5.269827295s" podCreationTimestamp="2026-01-28 17:18:41 +0000 UTC" firstStartedPulling="2026-01-28 17:18:43.219414151 +0000 UTC m=+5595.495385682" lastFinishedPulling="2026-01-28 17:18:45.868226112 +0000 UTC m=+5598.144197623" observedRunningTime="2026-01-28 17:18:46.265119367 +0000 UTC m=+5598.541090878" watchObservedRunningTime="2026-01-28 17:18:46.269827295 +0000 UTC m=+5598.545798806" Jan 28 17:18:48 crc kubenswrapper[4903]: I0128 17:18:48.272041 4903 generic.go:334] "Generic (PLEG): container finished" podID="13adc33f-5819-4774-82e5-eefd361bd22c" containerID="808bf977ddaa90666b89a675ac9bffc1c6ae565cb10e01b70a556a566c321959" exitCode=0 Jan 28 17:18:48 crc kubenswrapper[4903]: I0128 17:18:48.272113 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xhsp5" event={"ID":"13adc33f-5819-4774-82e5-eefd361bd22c","Type":"ContainerDied","Data":"808bf977ddaa90666b89a675ac9bffc1c6ae565cb10e01b70a556a566c321959"} Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.613067 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.722915 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-combined-ca-bundle\") pod \"13adc33f-5819-4774-82e5-eefd361bd22c\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.723015 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkm59\" (UniqueName: \"kubernetes.io/projected/13adc33f-5819-4774-82e5-eefd361bd22c-kube-api-access-tkm59\") pod \"13adc33f-5819-4774-82e5-eefd361bd22c\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.723126 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-db-sync-config-data\") pod \"13adc33f-5819-4774-82e5-eefd361bd22c\" (UID: \"13adc33f-5819-4774-82e5-eefd361bd22c\") " Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.728185 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "13adc33f-5819-4774-82e5-eefd361bd22c" (UID: "13adc33f-5819-4774-82e5-eefd361bd22c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.728379 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13adc33f-5819-4774-82e5-eefd361bd22c-kube-api-access-tkm59" (OuterVolumeSpecName: "kube-api-access-tkm59") pod "13adc33f-5819-4774-82e5-eefd361bd22c" (UID: "13adc33f-5819-4774-82e5-eefd361bd22c"). InnerVolumeSpecName "kube-api-access-tkm59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.744831 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13adc33f-5819-4774-82e5-eefd361bd22c" (UID: "13adc33f-5819-4774-82e5-eefd361bd22c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.826024 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.826382 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkm59\" (UniqueName: \"kubernetes.io/projected/13adc33f-5819-4774-82e5-eefd361bd22c-kube-api-access-tkm59\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:49 crc kubenswrapper[4903]: I0128 17:18:49.826398 4903 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13adc33f-5819-4774-82e5-eefd361bd22c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.302296 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xhsp5" event={"ID":"13adc33f-5819-4774-82e5-eefd361bd22c","Type":"ContainerDied","Data":"4380d2e247c0abf83da19608738ef49a6c68c98109a2eaabe9d2c1a6af90d941"} Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.302360 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4380d2e247c0abf83da19608738ef49a6c68c98109a2eaabe9d2c1a6af90d941" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.302362 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xhsp5" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.546674 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5dd9d8488-fx8r9"] Jan 28 17:18:50 crc kubenswrapper[4903]: E0128 17:18:50.547143 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13adc33f-5819-4774-82e5-eefd361bd22c" containerName="barbican-db-sync" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.547173 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="13adc33f-5819-4774-82e5-eefd361bd22c" containerName="barbican-db-sync" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.547359 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="13adc33f-5819-4774-82e5-eefd361bd22c" containerName="barbican-db-sync" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.548433 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.551815 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.552028 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h52b9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.555828 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.570911 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dd9d8488-fx8r9"] Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.582001 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9fd448cc5-6v2h7"] Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.589972 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.595410 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.632612 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9fd448cc5-6v2h7"] Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.638884 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgqx2\" (UniqueName: \"kubernetes.io/projected/a2154297-8ef5-4036-949d-f44e7bbd247e-kube-api-access-xgqx2\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.639181 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-config-data\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.639392 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrx8\" (UniqueName: \"kubernetes.io/projected/bf07e37a-f4ad-4247-8c53-3aebf007c02a-kube-api-access-prrx8\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.639518 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-config-data\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.639695 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf07e37a-f4ad-4247-8c53-3aebf007c02a-logs\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " 
pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.639915 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-combined-ca-bundle\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.640031 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-config-data-custom\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.640201 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2154297-8ef5-4036-949d-f44e7bbd247e-logs\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.640323 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.640483 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-config-data-custom\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.655364 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-746c85cf5f-cc6xg"] Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.657095 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.675653 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-746c85cf5f-cc6xg"] Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.731692 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77c76bd4d4-k9f4j"] Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.733666 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.737036 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.747793 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzkn\" (UniqueName: \"kubernetes.io/projected/6768af8e-8766-42db-95dd-802258413317-kube-api-access-2xzkn\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.747842 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrx8\" (UniqueName: \"kubernetes.io/projected/bf07e37a-f4ad-4247-8c53-3aebf007c02a-kube-api-access-prrx8\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.747875 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-config-data\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.747893 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf07e37a-f4ad-4247-8c53-3aebf007c02a-logs\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.747922 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-config\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.747983 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-sb\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.748035 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-combined-ca-bundle\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.748057 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-config-data-custom\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.748084 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2154297-8ef5-4036-949d-f44e7bbd247e-logs\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.748103 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.748135 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-dns-svc\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.748154 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-config-data-custom\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.748170 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-nb\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.753120 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgqx2\" (UniqueName: \"kubernetes.io/projected/a2154297-8ef5-4036-949d-f44e7bbd247e-kube-api-access-xgqx2\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.753186 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-config-data\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.756501 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf07e37a-f4ad-4247-8c53-3aebf007c02a-logs\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.758079 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2154297-8ef5-4036-949d-f44e7bbd247e-logs\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc 
kubenswrapper[4903]: I0128 17:18:50.772564 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-config-data-custom\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.774142 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-combined-ca-bundle\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.778105 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-config-data-custom\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.779339 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.782202 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2154297-8ef5-4036-949d-f44e7bbd247e-config-data\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.787309 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrx8\" (UniqueName: \"kubernetes.io/projected/bf07e37a-f4ad-4247-8c53-3aebf007c02a-kube-api-access-prrx8\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.787758 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgqx2\" (UniqueName: \"kubernetes.io/projected/a2154297-8ef5-4036-949d-f44e7bbd247e-kube-api-access-xgqx2\") pod \"barbican-keystone-listener-5dd9d8488-fx8r9\" (UID: \"a2154297-8ef5-4036-949d-f44e7bbd247e\") " pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.788082 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf07e37a-f4ad-4247-8c53-3aebf007c02a-config-data\") pod \"barbican-worker-9fd448cc5-6v2h7\" (UID: \"bf07e37a-f4ad-4247-8c53-3aebf007c02a\") " pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.804794 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c76bd4d4-k9f4j"] Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.854947 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data-custom\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855341 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xzkn\" (UniqueName: \"kubernetes.io/projected/6768af8e-8766-42db-95dd-802258413317-kube-api-access-2xzkn\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855391 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-config\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855419 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kt6\" (UniqueName: \"kubernetes.io/projected/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-kube-api-access-z9kt6\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855444 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-combined-ca-bundle\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855478 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855515 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-logs\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855566 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-sb\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855856 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-dns-svc\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.855928 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-nb\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.857714 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-nb\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.860319 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-dns-svc\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.860995 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-config\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.861183 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-sb\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.872106 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.882044 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xzkn\" (UniqueName: \"kubernetes.io/projected/6768af8e-8766-42db-95dd-802258413317-kube-api-access-2xzkn\") pod \"dnsmasq-dns-746c85cf5f-cc6xg\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.907288 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-9fd448cc5-6v2h7" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.957047 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data-custom\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.957156 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9kt6\" (UniqueName: \"kubernetes.io/projected/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-kube-api-access-z9kt6\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.957183 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-combined-ca-bundle\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.957222 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.957254 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-logs\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.957761 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-logs\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.963027 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.963479 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data-custom\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.965294 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-combined-ca-bundle\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:50 
crc kubenswrapper[4903]: I0128 17:18:50.989155 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:50 crc kubenswrapper[4903]: I0128 17:18:50.990421 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9kt6\" (UniqueName: \"kubernetes.io/projected/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-kube-api-access-z9kt6\") pod \"barbican-api-77c76bd4d4-k9f4j\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:51 crc kubenswrapper[4903]: I0128 17:18:51.058070 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:51 crc kubenswrapper[4903]: I0128 17:18:51.406034 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dd9d8488-fx8r9"] Jan 28 17:18:51 crc kubenswrapper[4903]: I0128 17:18:51.528736 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9fd448cc5-6v2h7"] Jan 28 17:18:51 crc kubenswrapper[4903]: W0128 17:18:51.538389 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf07e37a_f4ad_4247_8c53_3aebf007c02a.slice/crio-3cc2b4a260eff0602f70920cd045f9459e3786322c4d3459471ee3f31c0f1cfa WatchSource:0}: Error finding container 3cc2b4a260eff0602f70920cd045f9459e3786322c4d3459471ee3f31c0f1cfa: Status 404 returned error can't find the container with id 3cc2b4a260eff0602f70920cd045f9459e3786322c4d3459471ee3f31c0f1cfa Jan 28 17:18:51 crc kubenswrapper[4903]: I0128 17:18:51.600186 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-746c85cf5f-cc6xg"] Jan 28 17:18:51 crc kubenswrapper[4903]: W0128 17:18:51.618168 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6768af8e_8766_42db_95dd_802258413317.slice/crio-facd10890e6818896f5f8792225c983b19bba19ebc3918ecfbf8fd4d30ce44bb WatchSource:0}: Error finding container facd10890e6818896f5f8792225c983b19bba19ebc3918ecfbf8fd4d30ce44bb: Status 404 returned error can't find the container with id facd10890e6818896f5f8792225c983b19bba19ebc3918ecfbf8fd4d30ce44bb Jan 28 17:18:51 crc kubenswrapper[4903]: I0128 17:18:51.697053 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77c76bd4d4-k9f4j"] Jan 28 17:18:51 crc kubenswrapper[4903]: W0128 17:18:51.703051 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97095902_7fa9_4b3e_8b9e_db2b49cdc8b6.slice/crio-ea460d072164c6db465d0b389526b17eb11457937a11caefda60df825cbd080c WatchSource:0}: Error finding container ea460d072164c6db465d0b389526b17eb11457937a11caefda60df825cbd080c: Status 404 returned error can't find the container with id ea460d072164c6db465d0b389526b17eb11457937a11caefda60df825cbd080c Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.092362 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.092463 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.146903 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.319487 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" event={"ID":"a2154297-8ef5-4036-949d-f44e7bbd247e","Type":"ContainerStarted","Data":"8cad98bff122d218270f2c1489001fdd1a58d3688a041e32f0c82d045d064305"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.319530 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" event={"ID":"a2154297-8ef5-4036-949d-f44e7bbd247e","Type":"ContainerStarted","Data":"a2c0f6cb800646b52d99751dc76443fd6e298cdd7795a492eb25ffddf725efeb"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.319557 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" event={"ID":"a2154297-8ef5-4036-949d-f44e7bbd247e","Type":"ContainerStarted","Data":"e3b5672342b8684a2fcb50d8f3b2489b5fc9837ec0d2377236aba87fcd708a6e"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.322114 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c76bd4d4-k9f4j" event={"ID":"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6","Type":"ContainerStarted","Data":"49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.322150 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c76bd4d4-k9f4j" event={"ID":"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6","Type":"ContainerStarted","Data":"ea460d072164c6db465d0b389526b17eb11457937a11caefda60df825cbd080c"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.328499 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9fd448cc5-6v2h7" event={"ID":"bf07e37a-f4ad-4247-8c53-3aebf007c02a","Type":"ContainerStarted","Data":"7e9c85befb4a28f353de3904a462a49726eb0356a1444323fcf7c30ba94392f1"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.328566 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9fd448cc5-6v2h7" event={"ID":"bf07e37a-f4ad-4247-8c53-3aebf007c02a","Type":"ContainerStarted","Data":"908f2e18f0e7738e63bc5d97277aa2a131328ca96bff2a1db14dc45eeb631f99"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.328580 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9fd448cc5-6v2h7" event={"ID":"bf07e37a-f4ad-4247-8c53-3aebf007c02a","Type":"ContainerStarted","Data":"3cc2b4a260eff0602f70920cd045f9459e3786322c4d3459471ee3f31c0f1cfa"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.331617 4903 generic.go:334] "Generic (PLEG): container finished" podID="6768af8e-8766-42db-95dd-802258413317" containerID="a9d6924bc2d76fbeb685535a61ab1ccdd5728d5c6768be5b0ceb3bdd135abc8f" exitCode=0 Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.332993 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" event={"ID":"6768af8e-8766-42db-95dd-802258413317","Type":"ContainerDied","Data":"a9d6924bc2d76fbeb685535a61ab1ccdd5728d5c6768be5b0ceb3bdd135abc8f"} Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.333048 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" event={"ID":"6768af8e-8766-42db-95dd-802258413317","Type":"ContainerStarted","Data":"facd10890e6818896f5f8792225c983b19bba19ebc3918ecfbf8fd4d30ce44bb"} Jan 28 17:18:52 crc 
kubenswrapper[4903]: I0128 17:18:52.359929 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5dd9d8488-fx8r9" podStartSLOduration=2.359901121 podStartE2EDuration="2.359901121s" podCreationTimestamp="2026-01-28 17:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:52.338245274 +0000 UTC m=+5604.614216785" watchObservedRunningTime="2026-01-28 17:18:52.359901121 +0000 UTC m=+5604.635872632" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.383362 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9fd448cc5-6v2h7" podStartSLOduration=2.383337767 podStartE2EDuration="2.383337767s" podCreationTimestamp="2026-01-28 17:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:52.363825198 +0000 UTC m=+5604.639796699" watchObservedRunningTime="2026-01-28 17:18:52.383337767 +0000 UTC m=+5604.659309288" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.426023 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.508132 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zsqm"] Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.800112 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64fff85c58-9rqhr"] Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.801490 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.804364 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.804489 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.820261 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64fff85c58-9rqhr"] Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.893812 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-combined-ca-bundle\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.893872 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-config-data-custom\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.893932 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9ff\" (UniqueName: \"kubernetes.io/projected/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-kube-api-access-wc9ff\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: 
\"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.893977 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-logs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.894050 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-public-tls-certs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.894195 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-config-data\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.894232 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-internal-tls-certs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.995964 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-config-data\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.996008 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-internal-tls-certs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.996054 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-combined-ca-bundle\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.996070 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-config-data-custom\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.996096 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9ff\" (UniqueName: \"kubernetes.io/projected/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-kube-api-access-wc9ff\") pod 
\"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.996124 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-logs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.996162 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-public-tls-certs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:52 crc kubenswrapper[4903]: I0128 17:18:52.997296 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-logs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.001837 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-public-tls-certs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.003064 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-combined-ca-bundle\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.005712 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-config-data\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.008257 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-config-data-custom\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.009458 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-internal-tls-certs\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.065105 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9ff\" (UniqueName: \"kubernetes.io/projected/22b4bb45-f2b3-4b24-a562-2f044b5adfdd-kube-api-access-wc9ff\") pod \"barbican-api-64fff85c58-9rqhr\" (UID: \"22b4bb45-f2b3-4b24-a562-2f044b5adfdd\") " 
pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.140251 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.348383 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c76bd4d4-k9f4j" event={"ID":"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6","Type":"ContainerStarted","Data":"e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c"} Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.350107 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.350190 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.354325 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" event={"ID":"6768af8e-8766-42db-95dd-802258413317","Type":"ContainerStarted","Data":"52e35fb21df20d0136d93a5ea43e22dc5ac41a80bf42076d9d2fd67c2e7681d6"} Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.355412 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.383773 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podStartSLOduration=3.383662632 podStartE2EDuration="3.383662632s" podCreationTimestamp="2026-01-28 17:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:53.373106385 +0000 UTC m=+5605.649077896" watchObservedRunningTime="2026-01-28 17:18:53.383662632 +0000 UTC m=+5605.659634143" Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.406736 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" podStartSLOduration=3.406722537 podStartE2EDuration="3.406722537s" podCreationTimestamp="2026-01-28 17:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:53.397009043 +0000 UTC m=+5605.672980564" watchObservedRunningTime="2026-01-28 17:18:53.406722537 +0000 UTC m=+5605.682694038" Jan 28 17:18:53 crc kubenswrapper[4903]: W0128 17:18:53.662943 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22b4bb45_f2b3_4b24_a562_2f044b5adfdd.slice/crio-6c2a1e3d6ff8f385365927de2a292fc5bb3c7232dbb10d6363f6db83ba8443ff WatchSource:0}: Error finding container 6c2a1e3d6ff8f385365927de2a292fc5bb3c7232dbb10d6363f6db83ba8443ff: Status 404 returned error can't find the container with id 6c2a1e3d6ff8f385365927de2a292fc5bb3c7232dbb10d6363f6db83ba8443ff Jan 28 17:18:53 crc kubenswrapper[4903]: I0128 17:18:53.663961 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64fff85c58-9rqhr"] Jan 28 17:18:54 crc kubenswrapper[4903]: I0128 17:18:54.362484 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fff85c58-9rqhr" 
event={"ID":"22b4bb45-f2b3-4b24-a562-2f044b5adfdd","Type":"ContainerStarted","Data":"e69c07d9ed8b0d5c04f6754976cf9d0bbf2267d4ba32f21c280ec1119ef7ea0c"} Jan 28 17:18:54 crc kubenswrapper[4903]: I0128 17:18:54.363793 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fff85c58-9rqhr" event={"ID":"22b4bb45-f2b3-4b24-a562-2f044b5adfdd","Type":"ContainerStarted","Data":"4c1fbc48f23e0cb8602d22ea09082d9f4219a03dc80d43ccf55cd14d59baffc0"} Jan 28 17:18:54 crc kubenswrapper[4903]: I0128 17:18:54.363876 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64fff85c58-9rqhr" event={"ID":"22b4bb45-f2b3-4b24-a562-2f044b5adfdd","Type":"ContainerStarted","Data":"6c2a1e3d6ff8f385365927de2a292fc5bb3c7232dbb10d6363f6db83ba8443ff"} Jan 28 17:18:54 crc kubenswrapper[4903]: I0128 17:18:54.362669 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4zsqm" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="registry-server" containerID="cri-o://19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102" gracePeriod=2 Jan 28 17:18:54 crc kubenswrapper[4903]: I0128 17:18:54.392832 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64fff85c58-9rqhr" podStartSLOduration=2.392814945 podStartE2EDuration="2.392814945s" podCreationTimestamp="2026-01-28 17:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:18:54.389506295 +0000 UTC m=+5606.665477836" watchObservedRunningTime="2026-01-28 17:18:54.392814945 +0000 UTC m=+5606.668786456" Jan 28 17:18:54 crc kubenswrapper[4903]: I0128 17:18:54.966773 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.075435 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-utilities\") pod \"86168cbe-c0fc-4436-bee5-01d30c25884a\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.075649 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-catalog-content\") pod \"86168cbe-c0fc-4436-bee5-01d30c25884a\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.075692 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwfmg\" (UniqueName: \"kubernetes.io/projected/86168cbe-c0fc-4436-bee5-01d30c25884a-kube-api-access-zwfmg\") pod \"86168cbe-c0fc-4436-bee5-01d30c25884a\" (UID: \"86168cbe-c0fc-4436-bee5-01d30c25884a\") " Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.076773 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-utilities" (OuterVolumeSpecName: "utilities") pod "86168cbe-c0fc-4436-bee5-01d30c25884a" (UID: "86168cbe-c0fc-4436-bee5-01d30c25884a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.081572 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86168cbe-c0fc-4436-bee5-01d30c25884a-kube-api-access-zwfmg" (OuterVolumeSpecName: "kube-api-access-zwfmg") pod "86168cbe-c0fc-4436-bee5-01d30c25884a" (UID: "86168cbe-c0fc-4436-bee5-01d30c25884a"). InnerVolumeSpecName "kube-api-access-zwfmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.098648 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86168cbe-c0fc-4436-bee5-01d30c25884a" (UID: "86168cbe-c0fc-4436-bee5-01d30c25884a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.178102 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.178144 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwfmg\" (UniqueName: \"kubernetes.io/projected/86168cbe-c0fc-4436-bee5-01d30c25884a-kube-api-access-zwfmg\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.178157 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86168cbe-c0fc-4436-bee5-01d30c25884a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.372456 4903 generic.go:334] "Generic (PLEG): container finished" podID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerID="19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102" exitCode=0 Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.372517 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zsqm" event={"ID":"86168cbe-c0fc-4436-bee5-01d30c25884a","Type":"ContainerDied","Data":"19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102"} Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.372629 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4zsqm" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.373022 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4zsqm" event={"ID":"86168cbe-c0fc-4436-bee5-01d30c25884a","Type":"ContainerDied","Data":"2c020adf60b68a9fa397716378a851874a13855a35c5620a8282ac53af30311f"} Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.373056 4903 scope.go:117] "RemoveContainer" containerID="19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.373754 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.373778 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.402821 4903 scope.go:117] "RemoveContainer" containerID="db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.413610 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zsqm"] Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.427097 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4zsqm"] Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.427390 4903 scope.go:117] "RemoveContainer" containerID="cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.471906 4903 scope.go:117] "RemoveContainer" containerID="19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102" Jan 28 17:18:55 crc kubenswrapper[4903]: E0128 17:18:55.472429 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102\": container with ID starting with 19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102 not found: ID does not exist" containerID="19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.472473 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102"} err="failed to get container status \"19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102\": rpc error: code = NotFound desc = could not find container \"19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102\": container with ID starting with 19beeb78587417c01efc1fa3e66c3ccb08a3541b0864928531f385d412c1d102 not found: ID does not exist" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.472497 4903 scope.go:117] "RemoveContainer" containerID="db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76" Jan 28 17:18:55 crc kubenswrapper[4903]: E0128 17:18:55.473011 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76\": container with ID starting with db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76 not found: ID does not exist" containerID="db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76" Jan 28 17:18:55 crc 
kubenswrapper[4903]: I0128 17:18:55.473098 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76"} err="failed to get container status \"db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76\": rpc error: code = NotFound desc = could not find container \"db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76\": container with ID starting with db594e0300611ef7ff3c7b02c2d1f52048ab5d76251052e6c45f80fa80434d76 not found: ID does not exist" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.473132 4903 scope.go:117] "RemoveContainer" containerID="cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4" Jan 28 17:18:55 crc kubenswrapper[4903]: E0128 17:18:55.473484 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4\": container with ID starting with cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4 not found: ID does not exist" containerID="cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4" Jan 28 17:18:55 crc kubenswrapper[4903]: I0128 17:18:55.473517 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4"} err="failed to get container status \"cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4\": rpc error: code = NotFound desc = could not find container \"cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4\": container with ID starting with cb3c13b1ee656903a8b861702a09ad1c49aad0235d94bb1f55776b085ec670a4 not found: ID does not exist" Jan 28 17:18:56 crc kubenswrapper[4903]: I0128 17:18:56.425256 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" path="/var/lib/kubelet/pods/86168cbe-c0fc-4436-bee5-01d30c25884a/volumes" Jan 28 17:18:58 crc kubenswrapper[4903]: I0128 17:18:58.787374 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 17:18:59 crc kubenswrapper[4903]: I0128 17:18:59.683922 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:19:00 crc kubenswrapper[4903]: I0128 17:19:00.991429 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.059350 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f555c9df-76tds"] Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.059785 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75f555c9df-76tds" podUID="64310972-c89b-4d07-b959-e7ab26705cd3" containerName="dnsmasq-dns" containerID="cri-o://cdc4f47bab6ab45acf12f3fbec5673bc94fd40c66f5d514323a7b647ac99b657" gracePeriod=10 Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.223672 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64fff85c58-9rqhr" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.295997 4903 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-api-77c76bd4d4-k9f4j"] Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.296323 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" containerID="cri-o://49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace" gracePeriod=30 Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.296801 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" containerID="cri-o://e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c" gracePeriod=30 Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.313835 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": EOF" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.313835 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": EOF" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.313994 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": EOF" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.314473 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": EOF" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.446114 4903 generic.go:334] "Generic (PLEG): container finished" podID="64310972-c89b-4d07-b959-e7ab26705cd3" containerID="cdc4f47bab6ab45acf12f3fbec5673bc94fd40c66f5d514323a7b647ac99b657" exitCode=0 Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.446164 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f555c9df-76tds" event={"ID":"64310972-c89b-4d07-b959-e7ab26705cd3","Type":"ContainerDied","Data":"cdc4f47bab6ab45acf12f3fbec5673bc94fd40c66f5d514323a7b647ac99b657"} Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.655665 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.800177 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-dns-svc\") pod \"64310972-c89b-4d07-b959-e7ab26705cd3\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.800297 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-config\") pod \"64310972-c89b-4d07-b959-e7ab26705cd3\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.800360 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmkp2\" (UniqueName: \"kubernetes.io/projected/64310972-c89b-4d07-b959-e7ab26705cd3-kube-api-access-fmkp2\") pod \"64310972-c89b-4d07-b959-e7ab26705cd3\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.800445 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-nb\") pod \"64310972-c89b-4d07-b959-e7ab26705cd3\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.800569 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-sb\") pod \"64310972-c89b-4d07-b959-e7ab26705cd3\" (UID: \"64310972-c89b-4d07-b959-e7ab26705cd3\") " Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.808334 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64310972-c89b-4d07-b959-e7ab26705cd3-kube-api-access-fmkp2" (OuterVolumeSpecName: "kube-api-access-fmkp2") pod "64310972-c89b-4d07-b959-e7ab26705cd3" (UID: "64310972-c89b-4d07-b959-e7ab26705cd3"). InnerVolumeSpecName "kube-api-access-fmkp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.846984 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-config" (OuterVolumeSpecName: "config") pod "64310972-c89b-4d07-b959-e7ab26705cd3" (UID: "64310972-c89b-4d07-b959-e7ab26705cd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.847110 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64310972-c89b-4d07-b959-e7ab26705cd3" (UID: "64310972-c89b-4d07-b959-e7ab26705cd3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.861053 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64310972-c89b-4d07-b959-e7ab26705cd3" (UID: "64310972-c89b-4d07-b959-e7ab26705cd3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.865428 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64310972-c89b-4d07-b959-e7ab26705cd3" (UID: "64310972-c89b-4d07-b959-e7ab26705cd3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.902310 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.902351 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.902360 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.902370 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmkp2\" (UniqueName: \"kubernetes.io/projected/64310972-c89b-4d07-b959-e7ab26705cd3-kube-api-access-fmkp2\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:01 crc kubenswrapper[4903]: I0128 17:19:01.902379 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64310972-c89b-4d07-b959-e7ab26705cd3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.456167 4903 generic.go:334] "Generic (PLEG): container finished" podID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerID="49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace" exitCode=143 Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.456285 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c76bd4d4-k9f4j" event={"ID":"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6","Type":"ContainerDied","Data":"49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace"} Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.459301 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75f555c9df-76tds" event={"ID":"64310972-c89b-4d07-b959-e7ab26705cd3","Type":"ContainerDied","Data":"b95f89212ea3d3b28fff2f72d7f7909e8ccbd55709d26c6e9c030c8da53b8df0"} Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.459368 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75f555c9df-76tds" Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.459375 4903 scope.go:117] "RemoveContainer" containerID="cdc4f47bab6ab45acf12f3fbec5673bc94fd40c66f5d514323a7b647ac99b657" Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.491436 4903 scope.go:117] "RemoveContainer" containerID="f6ce6b68fd652cf13a492c0f75d08c9ed98dde01853267db79d37f5b3a606352" Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.499001 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75f555c9df-76tds"] Jan 28 17:19:02 crc kubenswrapper[4903]: I0128 17:19:02.507380 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75f555c9df-76tds"] Jan 28 17:19:04 crc kubenswrapper[4903]: I0128 17:19:04.425617 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64310972-c89b-4d07-b959-e7ab26705cd3" path="/var/lib/kubelet/pods/64310972-c89b-4d07-b959-e7ab26705cd3/volumes" Jan 28 17:19:06 crc kubenswrapper[4903]: I0128 17:19:06.396774 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:19:06 crc kubenswrapper[4903]: I0128 17:19:06.396769 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:19:06 crc kubenswrapper[4903]: I0128 17:19:06.704469 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": read tcp 10.217.0.2:37508->10.217.1.26:9311: read: connection reset by peer" Jan 28 17:19:06 crc kubenswrapper[4903]: I0128 17:19:06.704612 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77c76bd4d4-k9f4j" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.26:9311/healthcheck\": read tcp 10.217.0.2:37494->10.217.1.26:9311: read: connection reset by peer" Jan 28 17:19:06 crc kubenswrapper[4903]: E0128 17:19:06.924506 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97095902_7fa9_4b3e_8b9e_db2b49cdc8b6.slice/crio-conmon-e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c.scope\": RecentStats: unable to find data in memory cache]" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.051326 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.227977 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data\") pod \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.228567 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-combined-ca-bundle\") pod \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.228847 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-logs\") pod \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.229003 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9kt6\" (UniqueName: \"kubernetes.io/projected/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-kube-api-access-z9kt6\") pod \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.229121 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data-custom\") pod \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\" (UID: \"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6\") " Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.229678 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-logs" (OuterVolumeSpecName: "logs") pod "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" (UID: "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.233214 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" (UID: "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.234210 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-kube-api-access-z9kt6" (OuterVolumeSpecName: "kube-api-access-z9kt6") pod "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" (UID: "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6"). InnerVolumeSpecName "kube-api-access-z9kt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.252980 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" (UID: "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.283754 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data" (OuterVolumeSpecName: "config-data") pod "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" (UID: "97095902-7fa9-4b3e-8b9e-db2b49cdc8b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.357956 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.357993 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.358003 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.358016 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.358026 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9kt6\" (UniqueName: \"kubernetes.io/projected/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6-kube-api-access-z9kt6\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.501639 4903 generic.go:334] "Generic (PLEG): container finished" podID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerID="e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c" exitCode=0 Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.501696 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77c76bd4d4-k9f4j" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.501700 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c76bd4d4-k9f4j" event={"ID":"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6","Type":"ContainerDied","Data":"e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c"} Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.501846 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77c76bd4d4-k9f4j" event={"ID":"97095902-7fa9-4b3e-8b9e-db2b49cdc8b6","Type":"ContainerDied","Data":"ea460d072164c6db465d0b389526b17eb11457937a11caefda60df825cbd080c"} Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.501868 4903 scope.go:117] "RemoveContainer" containerID="e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.531806 4903 scope.go:117] "RemoveContainer" containerID="49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.556972 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-77c76bd4d4-k9f4j"] Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.566935 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-77c76bd4d4-k9f4j"] Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.568003 4903 scope.go:117] "RemoveContainer" containerID="e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c" Jan 28 17:19:07 crc kubenswrapper[4903]: E0128 17:19:07.568495 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c\": container with ID starting with e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c not found: ID does not exist" containerID="e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.568548 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c"} err="failed to get container status \"e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c\": rpc error: code = NotFound desc = could not find container \"e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c\": container with ID starting with e8a599c8e89631279e4a347a96a2c0f99fa2b8b28276eb7eaa6175ec6319613c not found: ID does not exist" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.568573 4903 scope.go:117] "RemoveContainer" containerID="49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace" Jan 28 17:19:07 crc kubenswrapper[4903]: E0128 17:19:07.568860 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace\": container with ID starting with 49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace not found: ID does not exist" containerID="49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace" Jan 28 17:19:07 crc kubenswrapper[4903]: I0128 17:19:07.568913 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace"} err="failed to get container status 
\"49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace\": rpc error: code = NotFound desc = could not find container \"49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace\": container with ID starting with 49945dde8821a867cf52ccafdb34aaac553385b68363e19bef23887d762a4ace not found: ID does not exist" Jan 28 17:19:08 crc kubenswrapper[4903]: I0128 17:19:08.427962 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" path="/var/lib/kubelet/pods/97095902-7fa9-4b3e-8b9e-db2b49cdc8b6/volumes" Jan 28 17:19:23 crc kubenswrapper[4903]: I0128 17:19:23.061164 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k7dg8"] Jan 28 17:19:23 crc kubenswrapper[4903]: I0128 17:19:23.069878 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k7dg8"] Jan 28 17:19:24 crc kubenswrapper[4903]: I0128 17:19:24.422952 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b4159c-7539-40b4-9e70-4b3bf1b079df" path="/var/lib/kubelet/pods/97b4159c-7539-40b4-9e70-4b3bf1b079df/volumes" Jan 28 17:19:26 crc kubenswrapper[4903]: I0128 17:19:26.613949 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:19:26 crc kubenswrapper[4903]: I0128 17:19:26.614282 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.272178 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b7e1-account-create-update-tqrft"] Jan 28 17:19:37 crc kubenswrapper[4903]: E0128 17:19:37.273347 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64310972-c89b-4d07-b959-e7ab26705cd3" containerName="dnsmasq-dns" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273368 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64310972-c89b-4d07-b959-e7ab26705cd3" containerName="dnsmasq-dns" Jan 28 17:19:37 crc kubenswrapper[4903]: E0128 17:19:37.273382 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64310972-c89b-4d07-b959-e7ab26705cd3" containerName="init" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273392 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="64310972-c89b-4d07-b959-e7ab26705cd3" containerName="init" Jan 28 17:19:37 crc kubenswrapper[4903]: E0128 17:19:37.273429 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="extract-utilities" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273440 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="extract-utilities" Jan 28 17:19:37 crc kubenswrapper[4903]: E0128 17:19:37.273461 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273469 4903 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" Jan 28 17:19:37 crc kubenswrapper[4903]: E0128 17:19:37.273483 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="registry-server" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273490 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="registry-server" Jan 28 17:19:37 crc kubenswrapper[4903]: E0128 17:19:37.273504 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273511 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" Jan 28 17:19:37 crc kubenswrapper[4903]: E0128 17:19:37.273521 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="extract-content" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273545 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="extract-content" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273744 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273766 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="97095902-7fa9-4b3e-8b9e-db2b49cdc8b6" containerName="barbican-api-log" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273779 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="64310972-c89b-4d07-b959-e7ab26705cd3" containerName="dnsmasq-dns" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.273795 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="86168cbe-c0fc-4436-bee5-01d30c25884a" containerName="registry-server" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.274492 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.277680 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.282447 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kptr5"] Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.283742 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.299189 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7e1-account-create-update-tqrft"] Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.323547 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kptr5"] Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.418889 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ca0ad53-779e-47b8-a2b1-89909a9e4660-operator-scripts\") pod \"neutron-db-create-kptr5\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.419227 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-operator-scripts\") pod \"neutron-b7e1-account-create-update-tqrft\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.419328 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkjv\" (UniqueName: \"kubernetes.io/projected/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-kube-api-access-znkjv\") pod \"neutron-b7e1-account-create-update-tqrft\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.419363 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99dw\" (UniqueName: \"kubernetes.io/projected/4ca0ad53-779e-47b8-a2b1-89909a9e4660-kube-api-access-v99dw\") pod \"neutron-db-create-kptr5\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.521662 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-operator-scripts\") pod \"neutron-b7e1-account-create-update-tqrft\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.522150 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znkjv\" (UniqueName: \"kubernetes.io/projected/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-kube-api-access-znkjv\") pod \"neutron-b7e1-account-create-update-tqrft\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.522258 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99dw\" (UniqueName: \"kubernetes.io/projected/4ca0ad53-779e-47b8-a2b1-89909a9e4660-kube-api-access-v99dw\") pod \"neutron-db-create-kptr5\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.522900 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4ca0ad53-779e-47b8-a2b1-89909a9e4660-operator-scripts\") pod \"neutron-db-create-kptr5\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.523064 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-operator-scripts\") pod \"neutron-b7e1-account-create-update-tqrft\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.526257 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ca0ad53-779e-47b8-a2b1-89909a9e4660-operator-scripts\") pod \"neutron-db-create-kptr5\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.551444 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99dw\" (UniqueName: \"kubernetes.io/projected/4ca0ad53-779e-47b8-a2b1-89909a9e4660-kube-api-access-v99dw\") pod \"neutron-db-create-kptr5\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.552306 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znkjv\" (UniqueName: \"kubernetes.io/projected/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-kube-api-access-znkjv\") pod \"neutron-b7e1-account-create-update-tqrft\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.599411 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:37 crc kubenswrapper[4903]: I0128 17:19:37.608162 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.109322 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b7e1-account-create-update-tqrft"] Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.130583 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kptr5"] Jan 28 17:19:38 crc kubenswrapper[4903]: W0128 17:19:38.167224 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ca0ad53_779e_47b8_a2b1_89909a9e4660.slice/crio-e650d454f4539ce7051fd0faaf7ebde90335d1836e75987ca001cb18673b3641 WatchSource:0}: Error finding container e650d454f4539ce7051fd0faaf7ebde90335d1836e75987ca001cb18673b3641: Status 404 returned error can't find the container with id e650d454f4539ce7051fd0faaf7ebde90335d1836e75987ca001cb18673b3641 Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.774256 4903 generic.go:334] "Generic (PLEG): container finished" podID="4ca0ad53-779e-47b8-a2b1-89909a9e4660" containerID="081478be03050bcbf27057068a6e1ded2bd5896bf5def0eb768518af7caf7966" exitCode=0 Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.774351 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kptr5" event={"ID":"4ca0ad53-779e-47b8-a2b1-89909a9e4660","Type":"ContainerDied","Data":"081478be03050bcbf27057068a6e1ded2bd5896bf5def0eb768518af7caf7966"} Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.774416 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kptr5" event={"ID":"4ca0ad53-779e-47b8-a2b1-89909a9e4660","Type":"ContainerStarted","Data":"e650d454f4539ce7051fd0faaf7ebde90335d1836e75987ca001cb18673b3641"} Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.795974 4903 generic.go:334] "Generic (PLEG): container finished" podID="2ecb8d37-b0da-4a74-9ddf-ea994c2a8822" containerID="a1c9876ce5d33ec37ab1c02a6eb02594835306e4ccbdcd81e3ac9ba7609297a9" exitCode=0 Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.796022 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7e1-account-create-update-tqrft" event={"ID":"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822","Type":"ContainerDied","Data":"a1c9876ce5d33ec37ab1c02a6eb02594835306e4ccbdcd81e3ac9ba7609297a9"} Jan 28 17:19:38 crc kubenswrapper[4903]: I0128 17:19:38.796051 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7e1-account-create-update-tqrft" event={"ID":"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822","Type":"ContainerStarted","Data":"8f86e33fca8201eb74279352617facb6fdd04a8f22e847b0567c703177190a53"} Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.158475 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.163979 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.281264 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znkjv\" (UniqueName: \"kubernetes.io/projected/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-kube-api-access-znkjv\") pod \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.281482 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v99dw\" (UniqueName: \"kubernetes.io/projected/4ca0ad53-779e-47b8-a2b1-89909a9e4660-kube-api-access-v99dw\") pod \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.281560 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ca0ad53-779e-47b8-a2b1-89909a9e4660-operator-scripts\") pod \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\" (UID: \"4ca0ad53-779e-47b8-a2b1-89909a9e4660\") " Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.281614 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-operator-scripts\") pod \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\" (UID: \"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822\") " Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.282249 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ca0ad53-779e-47b8-a2b1-89909a9e4660-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ca0ad53-779e-47b8-a2b1-89909a9e4660" (UID: "4ca0ad53-779e-47b8-a2b1-89909a9e4660"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.282373 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ecb8d37-b0da-4a74-9ddf-ea994c2a8822" (UID: "2ecb8d37-b0da-4a74-9ddf-ea994c2a8822"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.283115 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ca0ad53-779e-47b8-a2b1-89909a9e4660-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.283150 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.287771 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ca0ad53-779e-47b8-a2b1-89909a9e4660-kube-api-access-v99dw" (OuterVolumeSpecName: "kube-api-access-v99dw") pod "4ca0ad53-779e-47b8-a2b1-89909a9e4660" (UID: "4ca0ad53-779e-47b8-a2b1-89909a9e4660"). InnerVolumeSpecName "kube-api-access-v99dw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.289284 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-kube-api-access-znkjv" (OuterVolumeSpecName: "kube-api-access-znkjv") pod "2ecb8d37-b0da-4a74-9ddf-ea994c2a8822" (UID: "2ecb8d37-b0da-4a74-9ddf-ea994c2a8822"). InnerVolumeSpecName "kube-api-access-znkjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.384653 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v99dw\" (UniqueName: \"kubernetes.io/projected/4ca0ad53-779e-47b8-a2b1-89909a9e4660-kube-api-access-v99dw\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.384697 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znkjv\" (UniqueName: \"kubernetes.io/projected/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822-kube-api-access-znkjv\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.813062 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b7e1-account-create-update-tqrft" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.813056 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b7e1-account-create-update-tqrft" event={"ID":"2ecb8d37-b0da-4a74-9ddf-ea994c2a8822","Type":"ContainerDied","Data":"8f86e33fca8201eb74279352617facb6fdd04a8f22e847b0567c703177190a53"} Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.813217 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f86e33fca8201eb74279352617facb6fdd04a8f22e847b0567c703177190a53" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.814377 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kptr5" event={"ID":"4ca0ad53-779e-47b8-a2b1-89909a9e4660","Type":"ContainerDied","Data":"e650d454f4539ce7051fd0faaf7ebde90335d1836e75987ca001cb18673b3641"} Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.814400 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e650d454f4539ce7051fd0faaf7ebde90335d1836e75987ca001cb18673b3641" Jan 28 17:19:40 crc kubenswrapper[4903]: I0128 17:19:40.814450 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kptr5" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.437396 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-n5klz"] Jan 28 17:19:42 crc kubenswrapper[4903]: E0128 17:19:42.438133 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ca0ad53-779e-47b8-a2b1-89909a9e4660" containerName="mariadb-database-create" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.438153 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ca0ad53-779e-47b8-a2b1-89909a9e4660" containerName="mariadb-database-create" Jan 28 17:19:42 crc kubenswrapper[4903]: E0128 17:19:42.438200 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecb8d37-b0da-4a74-9ddf-ea994c2a8822" containerName="mariadb-account-create-update" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.438208 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecb8d37-b0da-4a74-9ddf-ea994c2a8822" containerName="mariadb-account-create-update" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.438384 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ca0ad53-779e-47b8-a2b1-89909a9e4660" containerName="mariadb-database-create" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.438416 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ecb8d37-b0da-4a74-9ddf-ea994c2a8822" containerName="mariadb-account-create-update" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.439105 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.442508 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.442540 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.443291 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r96fc" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.445625 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-n5klz"] Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.523707 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbfbx\" (UniqueName: \"kubernetes.io/projected/61e5c64f-8064-4fed-9bec-197f34e62bfb-kube-api-access-rbfbx\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.523761 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-combined-ca-bundle\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.524062 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-config\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 
17:19:42.626987 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbfbx\" (UniqueName: \"kubernetes.io/projected/61e5c64f-8064-4fed-9bec-197f34e62bfb-kube-api-access-rbfbx\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.627078 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-combined-ca-bundle\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.627170 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-config\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.632723 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-combined-ca-bundle\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.632775 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-config\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.649646 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbfbx\" (UniqueName: \"kubernetes.io/projected/61e5c64f-8064-4fed-9bec-197f34e62bfb-kube-api-access-rbfbx\") pod \"neutron-db-sync-n5klz\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:42 crc kubenswrapper[4903]: I0128 17:19:42.759097 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:43 crc kubenswrapper[4903]: I0128 17:19:43.210231 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-n5klz"] Jan 28 17:19:43 crc kubenswrapper[4903]: I0128 17:19:43.845815 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n5klz" event={"ID":"61e5c64f-8064-4fed-9bec-197f34e62bfb","Type":"ContainerStarted","Data":"7e9f6d5affbbec650b8b75ce3a951dbc0b1a14767b5c346fe669313a196732e8"} Jan 28 17:19:43 crc kubenswrapper[4903]: I0128 17:19:43.846210 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n5klz" event={"ID":"61e5c64f-8064-4fed-9bec-197f34e62bfb","Type":"ContainerStarted","Data":"43ac77f849974174146f399d70e05a4d711a69fddd746fb934b540a8fe4b8984"} Jan 28 17:19:43 crc kubenswrapper[4903]: I0128 17:19:43.868853 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-n5klz" podStartSLOduration=1.868824515 podStartE2EDuration="1.868824515s" podCreationTimestamp="2026-01-28 17:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:43.861773073 +0000 UTC m=+5656.137744604" watchObservedRunningTime="2026-01-28 17:19:43.868824515 +0000 UTC m=+5656.144796036" Jan 28 17:19:47 crc kubenswrapper[4903]: E0128 17:19:47.807086 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61e5c64f_8064_4fed_9bec_197f34e62bfb.slice/crio-conmon-7e9f6d5affbbec650b8b75ce3a951dbc0b1a14767b5c346fe669313a196732e8.scope\": RecentStats: unable to find data in memory cache]" Jan 28 17:19:47 crc kubenswrapper[4903]: I0128 17:19:47.882071 4903 generic.go:334] "Generic (PLEG): container finished" podID="61e5c64f-8064-4fed-9bec-197f34e62bfb" containerID="7e9f6d5affbbec650b8b75ce3a951dbc0b1a14767b5c346fe669313a196732e8" exitCode=0 Jan 28 17:19:47 crc kubenswrapper[4903]: I0128 17:19:47.882116 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n5klz" event={"ID":"61e5c64f-8064-4fed-9bec-197f34e62bfb","Type":"ContainerDied","Data":"7e9f6d5affbbec650b8b75ce3a951dbc0b1a14767b5c346fe669313a196732e8"} Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.237584 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.357705 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbfbx\" (UniqueName: \"kubernetes.io/projected/61e5c64f-8064-4fed-9bec-197f34e62bfb-kube-api-access-rbfbx\") pod \"61e5c64f-8064-4fed-9bec-197f34e62bfb\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.357994 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-config\") pod \"61e5c64f-8064-4fed-9bec-197f34e62bfb\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.358091 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-combined-ca-bundle\") pod \"61e5c64f-8064-4fed-9bec-197f34e62bfb\" (UID: \"61e5c64f-8064-4fed-9bec-197f34e62bfb\") " Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.363210 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e5c64f-8064-4fed-9bec-197f34e62bfb-kube-api-access-rbfbx" (OuterVolumeSpecName: "kube-api-access-rbfbx") pod "61e5c64f-8064-4fed-9bec-197f34e62bfb" (UID: "61e5c64f-8064-4fed-9bec-197f34e62bfb"). InnerVolumeSpecName "kube-api-access-rbfbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.387742 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-config" (OuterVolumeSpecName: "config") pod "61e5c64f-8064-4fed-9bec-197f34e62bfb" (UID: "61e5c64f-8064-4fed-9bec-197f34e62bfb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.389140 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61e5c64f-8064-4fed-9bec-197f34e62bfb" (UID: "61e5c64f-8064-4fed-9bec-197f34e62bfb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.460723 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbfbx\" (UniqueName: \"kubernetes.io/projected/61e5c64f-8064-4fed-9bec-197f34e62bfb-kube-api-access-rbfbx\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.460772 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.460788 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e5c64f-8064-4fed-9bec-197f34e62bfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.906648 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n5klz" event={"ID":"61e5c64f-8064-4fed-9bec-197f34e62bfb","Type":"ContainerDied","Data":"43ac77f849974174146f399d70e05a4d711a69fddd746fb934b540a8fe4b8984"} Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.906689 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43ac77f849974174146f399d70e05a4d711a69fddd746fb934b540a8fe4b8984" Jan 28 17:19:49 crc kubenswrapper[4903]: I0128 17:19:49.906778 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-n5klz" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.120588 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfdfff4d7-wh5xv"] Jan 28 17:19:50 crc kubenswrapper[4903]: E0128 17:19:50.121205 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e5c64f-8064-4fed-9bec-197f34e62bfb" containerName="neutron-db-sync" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.121219 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e5c64f-8064-4fed-9bec-197f34e62bfb" containerName="neutron-db-sync" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.121389 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e5c64f-8064-4fed-9bec-197f34e62bfb" containerName="neutron-db-sync" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.122288 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.157332 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfdfff4d7-wh5xv"] Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.278386 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh5zw\" (UniqueName: \"kubernetes.io/projected/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-kube-api-access-nh5zw\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.278451 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-dns-svc\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.278514 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.278576 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-config\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.278642 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.299407 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6596bd8f56-8dd8h"] Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.301402 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.306234 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r96fc" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.306445 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.307029 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.307156 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.334392 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6596bd8f56-8dd8h"] Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384591 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh5zw\" (UniqueName: \"kubernetes.io/projected/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-kube-api-access-nh5zw\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384643 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-config\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384667 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-dns-svc\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384681 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-ovndb-tls-certs\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384712 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xr4\" (UniqueName: \"kubernetes.io/projected/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-kube-api-access-r9xr4\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384755 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384791 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-config\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: 
\"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384846 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384873 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-httpd-config\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.384906 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-combined-ca-bundle\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.386210 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-nb\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.386356 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-sb\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.386474 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-dns-svc\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.386256 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-config\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.407317 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh5zw\" (UniqueName: \"kubernetes.io/projected/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-kube-api-access-nh5zw\") pod \"dnsmasq-dns-7bfdfff4d7-wh5xv\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.458609 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.486743 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-httpd-config\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.486799 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-combined-ca-bundle\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.486840 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-config\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.486859 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-ovndb-tls-certs\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.486887 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9xr4\" (UniqueName: \"kubernetes.io/projected/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-kube-api-access-r9xr4\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.493718 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-httpd-config\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.494241 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-ovndb-tls-certs\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.495242 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-combined-ca-bundle\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.498324 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-config\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.510135 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r9xr4\" (UniqueName: \"kubernetes.io/projected/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-kube-api-access-r9xr4\") pod \"neutron-6596bd8f56-8dd8h\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.633250 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.828109 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfdfff4d7-wh5xv"] Jan 28 17:19:50 crc kubenswrapper[4903]: I0128 17:19:50.923120 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" event={"ID":"9cc53a7e-4590-488e-a1c9-4a3f8a1baece","Type":"ContainerStarted","Data":"777aa5449da9578708a471c9cf54dfa3df59a0bae52dbe9f325862888e5379ee"} Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.277635 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6596bd8f56-8dd8h"] Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.933616 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6596bd8f56-8dd8h" event={"ID":"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201","Type":"ContainerStarted","Data":"5d6d8122efb4a39789583a27661cae3e668dec6b4b9b03b2cf966d81b6e5bc9a"} Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.933984 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.934001 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6596bd8f56-8dd8h" event={"ID":"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201","Type":"ContainerStarted","Data":"17d115ab2775241dd2074cb918029507d33eb101cc37e3982d882f28c3db6017"} Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.934017 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6596bd8f56-8dd8h" event={"ID":"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201","Type":"ContainerStarted","Data":"93728c985748bd812c188573894150f126c431a0b3f6f1f8d8ee4a3046e406b7"} Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.935119 4903 generic.go:334] "Generic (PLEG): container finished" podID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerID="33f92d9f7e88a8cfff37e8d7f5a6a9d874903049f567bd030447e0d769163ce7" exitCode=0 Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.935163 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" event={"ID":"9cc53a7e-4590-488e-a1c9-4a3f8a1baece","Type":"ContainerDied","Data":"33f92d9f7e88a8cfff37e8d7f5a6a9d874903049f567bd030447e0d769163ce7"} Jan 28 17:19:51 crc kubenswrapper[4903]: I0128 17:19:51.956043 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6596bd8f56-8dd8h" podStartSLOduration=1.956019194 podStartE2EDuration="1.956019194s" podCreationTimestamp="2026-01-28 17:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:51.951862861 +0000 UTC m=+5664.227834382" watchObservedRunningTime="2026-01-28 17:19:51.956019194 +0000 UTC m=+5664.231990705" Jan 28 17:19:52 crc kubenswrapper[4903]: I0128 17:19:52.952739 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" 
event={"ID":"9cc53a7e-4590-488e-a1c9-4a3f8a1baece","Type":"ContainerStarted","Data":"f0c10d7cdea84a83cf9c3a3a6574455fa48fcf7749778d22cb5fdd9882abd56e"} Jan 28 17:19:52 crc kubenswrapper[4903]: I0128 17:19:52.953110 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.003016 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" podStartSLOduration=3.002994583 podStartE2EDuration="3.002994583s" podCreationTimestamp="2026-01-28 17:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:52.984734878 +0000 UTC m=+5665.260706389" watchObservedRunningTime="2026-01-28 17:19:53.002994583 +0000 UTC m=+5665.278966104" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.004892 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75497f8b65-6bx5m"] Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.006723 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.013642 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.013835 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.017317 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75497f8b65-6bx5m"] Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.136228 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-httpd-config\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.136578 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-combined-ca-bundle\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.136712 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-config\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.136871 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-internal-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.136999 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-public-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.137115 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-ovndb-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.137235 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwhk\" (UniqueName: \"kubernetes.io/projected/faf2d09d-c016-4be2-b534-6d33865c9a46-kube-api-access-ckwhk\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.238384 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckwhk\" (UniqueName: \"kubernetes.io/projected/faf2d09d-c016-4be2-b534-6d33865c9a46-kube-api-access-ckwhk\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.238483 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-httpd-config\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.238519 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-combined-ca-bundle\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.238580 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-config\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.238625 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-internal-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.238659 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-public-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.238678 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-ovndb-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.244410 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-config\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.245333 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-ovndb-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.246332 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-internal-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.248183 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-httpd-config\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.248417 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-combined-ca-bundle\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.253328 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf2d09d-c016-4be2-b534-6d33865c9a46-public-tls-certs\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.257288 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckwhk\" (UniqueName: \"kubernetes.io/projected/faf2d09d-c016-4be2-b534-6d33865c9a46-kube-api-access-ckwhk\") pod \"neutron-75497f8b65-6bx5m\" (UID: \"faf2d09d-c016-4be2-b534-6d33865c9a46\") " pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.340439 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.922558 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75497f8b65-6bx5m"] Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.961367 4903 scope.go:117] "RemoveContainer" containerID="89a3ea293d417a46807ee93d90b0cc278be8ba2eb0e87729c6e816b00ae566b5" Jan 28 17:19:53 crc kubenswrapper[4903]: I0128 17:19:53.965117 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75497f8b65-6bx5m" event={"ID":"faf2d09d-c016-4be2-b534-6d33865c9a46","Type":"ContainerStarted","Data":"0e754d4ed09170efa71b3357131c5c6c50270861fd3f95b47b0ddca329577561"} Jan 28 17:19:54 crc kubenswrapper[4903]: I0128 17:19:54.974928 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75497f8b65-6bx5m" event={"ID":"faf2d09d-c016-4be2-b534-6d33865c9a46","Type":"ContainerStarted","Data":"2fddb451a351600e7a41bc20c6d3d0350094fd3aa6284bf66d39ccef72a8e28f"} Jan 28 17:19:54 crc kubenswrapper[4903]: I0128 17:19:54.975391 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75497f8b65-6bx5m" event={"ID":"faf2d09d-c016-4be2-b534-6d33865c9a46","Type":"ContainerStarted","Data":"ba7c34307b2461437ac53fa11fe7811b8f52af94f7c27d16981bee184706767a"} Jan 28 17:19:54 crc kubenswrapper[4903]: I0128 17:19:54.975438 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:19:55 crc kubenswrapper[4903]: I0128 17:19:55.017742 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75497f8b65-6bx5m" podStartSLOduration=3.0177192330000002 podStartE2EDuration="3.017719233s" podCreationTimestamp="2026-01-28 17:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:19:55.004966627 +0000 UTC m=+5667.280938138" watchObservedRunningTime="2026-01-28 17:19:55.017719233 +0000 UTC m=+5667.293690744" Jan 28 17:19:56 crc kubenswrapper[4903]: I0128 17:19:56.614225 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:19:56 crc kubenswrapper[4903]: I0128 17:19:56.614737 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:20:00 crc kubenswrapper[4903]: I0128 17:20:00.461743 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:20:00 crc kubenswrapper[4903]: I0128 17:20:00.524246 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-746c85cf5f-cc6xg"] Jan 28 17:20:00 crc kubenswrapper[4903]: I0128 17:20:00.524713 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" podUID="6768af8e-8766-42db-95dd-802258413317" containerName="dnsmasq-dns" containerID="cri-o://52e35fb21df20d0136d93a5ea43e22dc5ac41a80bf42076d9d2fd67c2e7681d6" gracePeriod=10 Jan 28 17:20:01 
crc kubenswrapper[4903]: I0128 17:20:01.023121 4903 generic.go:334] "Generic (PLEG): container finished" podID="6768af8e-8766-42db-95dd-802258413317" containerID="52e35fb21df20d0136d93a5ea43e22dc5ac41a80bf42076d9d2fd67c2e7681d6" exitCode=0 Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.023611 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" event={"ID":"6768af8e-8766-42db-95dd-802258413317","Type":"ContainerDied","Data":"52e35fb21df20d0136d93a5ea43e22dc5ac41a80bf42076d9d2fd67c2e7681d6"} Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.023647 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" event={"ID":"6768af8e-8766-42db-95dd-802258413317","Type":"ContainerDied","Data":"facd10890e6818896f5f8792225c983b19bba19ebc3918ecfbf8fd4d30ce44bb"} Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.023735 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="facd10890e6818896f5f8792225c983b19bba19ebc3918ecfbf8fd4d30ce44bb" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.035487 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.107168 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-config\") pod \"6768af8e-8766-42db-95dd-802258413317\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.107253 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-dns-svc\") pod \"6768af8e-8766-42db-95dd-802258413317\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.107317 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-nb\") pod \"6768af8e-8766-42db-95dd-802258413317\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.107359 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xzkn\" (UniqueName: \"kubernetes.io/projected/6768af8e-8766-42db-95dd-802258413317-kube-api-access-2xzkn\") pod \"6768af8e-8766-42db-95dd-802258413317\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.107392 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-sb\") pod \"6768af8e-8766-42db-95dd-802258413317\" (UID: \"6768af8e-8766-42db-95dd-802258413317\") " Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.133960 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6768af8e-8766-42db-95dd-802258413317-kube-api-access-2xzkn" (OuterVolumeSpecName: "kube-api-access-2xzkn") pod "6768af8e-8766-42db-95dd-802258413317" (UID: "6768af8e-8766-42db-95dd-802258413317"). InnerVolumeSpecName "kube-api-access-2xzkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.181354 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6768af8e-8766-42db-95dd-802258413317" (UID: "6768af8e-8766-42db-95dd-802258413317"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.202042 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-config" (OuterVolumeSpecName: "config") pod "6768af8e-8766-42db-95dd-802258413317" (UID: "6768af8e-8766-42db-95dd-802258413317"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.209750 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.209786 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.209797 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xzkn\" (UniqueName: \"kubernetes.io/projected/6768af8e-8766-42db-95dd-802258413317-kube-api-access-2xzkn\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.214062 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6768af8e-8766-42db-95dd-802258413317" (UID: "6768af8e-8766-42db-95dd-802258413317"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.215873 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6768af8e-8766-42db-95dd-802258413317" (UID: "6768af8e-8766-42db-95dd-802258413317"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.310624 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:01 crc kubenswrapper[4903]: I0128 17:20:01.310655 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6768af8e-8766-42db-95dd-802258413317-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:02 crc kubenswrapper[4903]: I0128 17:20:02.031093 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" Jan 28 17:20:02 crc kubenswrapper[4903]: I0128 17:20:02.063610 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-746c85cf5f-cc6xg"] Jan 28 17:20:02 crc kubenswrapper[4903]: I0128 17:20:02.074111 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-746c85cf5f-cc6xg"] Jan 28 17:20:02 crc kubenswrapper[4903]: I0128 17:20:02.425743 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6768af8e-8766-42db-95dd-802258413317" path="/var/lib/kubelet/pods/6768af8e-8766-42db-95dd-802258413317/volumes" Jan 28 17:20:05 crc kubenswrapper[4903]: I0128 17:20:05.990924 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-746c85cf5f-cc6xg" podUID="6768af8e-8766-42db-95dd-802258413317" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.25:5353: i/o timeout" Jan 28 17:20:20 crc kubenswrapper[4903]: I0128 17:20:20.644721 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:20:23 crc kubenswrapper[4903]: I0128 17:20:23.354051 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75497f8b65-6bx5m" Jan 28 17:20:23 crc kubenswrapper[4903]: I0128 17:20:23.424158 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6596bd8f56-8dd8h"] Jan 28 17:20:23 crc kubenswrapper[4903]: I0128 17:20:23.424395 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6596bd8f56-8dd8h" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-api" containerID="cri-o://17d115ab2775241dd2074cb918029507d33eb101cc37e3982d882f28c3db6017" gracePeriod=30 Jan 28 17:20:23 crc kubenswrapper[4903]: I0128 17:20:23.424522 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6596bd8f56-8dd8h" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-httpd" containerID="cri-o://5d6d8122efb4a39789583a27661cae3e668dec6b4b9b03b2cf966d81b6e5bc9a" gracePeriod=30 Jan 28 17:20:24 crc kubenswrapper[4903]: I0128 17:20:24.213902 4903 generic.go:334] "Generic (PLEG): container finished" podID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerID="5d6d8122efb4a39789583a27661cae3e668dec6b4b9b03b2cf966d81b6e5bc9a" exitCode=0 Jan 28 17:20:24 crc kubenswrapper[4903]: I0128 17:20:24.213950 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6596bd8f56-8dd8h" event={"ID":"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201","Type":"ContainerDied","Data":"5d6d8122efb4a39789583a27661cae3e668dec6b4b9b03b2cf966d81b6e5bc9a"} Jan 28 17:20:26 crc kubenswrapper[4903]: I0128 17:20:26.614090 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:20:26 crc kubenswrapper[4903]: I0128 17:20:26.614806 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:20:26 crc kubenswrapper[4903]: I0128 
17:20:26.614893 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:20:26 crc kubenswrapper[4903]: I0128 17:20:26.616334 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:20:26 crc kubenswrapper[4903]: I0128 17:20:26.616428 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" gracePeriod=600 Jan 28 17:20:26 crc kubenswrapper[4903]: E0128 17:20:26.739837 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:20:27 crc kubenswrapper[4903]: I0128 17:20:27.235109 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" exitCode=0 Jan 28 17:20:27 crc kubenswrapper[4903]: I0128 17:20:27.235155 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce"} Jan 28 17:20:27 crc kubenswrapper[4903]: I0128 17:20:27.235188 4903 scope.go:117] "RemoveContainer" containerID="7378b6481e12992f0a6ba3f03ca88e1ce24c2396b78c53e3ea7dd86651deb56a" Jan 28 17:20:27 crc kubenswrapper[4903]: I0128 17:20:27.235745 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:20:27 crc kubenswrapper[4903]: E0128 17:20:27.236025 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.253483 4903 generic.go:334] "Generic (PLEG): container finished" podID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerID="17d115ab2775241dd2074cb918029507d33eb101cc37e3982d882f28c3db6017" exitCode=0 Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.254064 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6596bd8f56-8dd8h" event={"ID":"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201","Type":"ContainerDied","Data":"17d115ab2775241dd2074cb918029507d33eb101cc37e3982d882f28c3db6017"} Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.254095 4903 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6596bd8f56-8dd8h" event={"ID":"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201","Type":"ContainerDied","Data":"93728c985748bd812c188573894150f126c431a0b3f6f1f8d8ee4a3046e406b7"} Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.254106 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93728c985748bd812c188573894150f126c431a0b3f6f1f8d8ee4a3046e406b7" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.291155 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.318440 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-combined-ca-bundle\") pod \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.318503 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9xr4\" (UniqueName: \"kubernetes.io/projected/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-kube-api-access-r9xr4\") pod \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.318541 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-ovndb-tls-certs\") pod \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.318709 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-httpd-config\") pod \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.318731 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-config\") pod \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\" (UID: \"2c27dff1-b445-43ac-b8d9-7b6bbfe9f201\") " Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.334126 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-kube-api-access-r9xr4" (OuterVolumeSpecName: "kube-api-access-r9xr4") pod "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" (UID: "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201"). InnerVolumeSpecName "kube-api-access-r9xr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.337543 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" (UID: "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.374654 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" (UID: "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.378108 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-config" (OuterVolumeSpecName: "config") pod "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" (UID: "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.397558 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" (UID: "2c27dff1-b445-43ac-b8d9-7b6bbfe9f201"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.420627 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.420843 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9xr4\" (UniqueName: \"kubernetes.io/projected/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-kube-api-access-r9xr4\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.420965 4903 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.421050 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:29 crc kubenswrapper[4903]: I0128 17:20:29.421134 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:30 crc kubenswrapper[4903]: I0128 17:20:30.259172 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6596bd8f56-8dd8h" Jan 28 17:20:30 crc kubenswrapper[4903]: I0128 17:20:30.301079 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6596bd8f56-8dd8h"] Jan 28 17:20:30 crc kubenswrapper[4903]: I0128 17:20:30.308582 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6596bd8f56-8dd8h"] Jan 28 17:20:30 crc kubenswrapper[4903]: I0128 17:20:30.426247 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" path="/var/lib/kubelet/pods/2c27dff1-b445-43ac-b8d9-7b6bbfe9f201/volumes" Jan 28 17:20:38 crc kubenswrapper[4903]: I0128 17:20:38.420820 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:20:38 crc kubenswrapper[4903]: E0128 17:20:38.422192 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.082410 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-t2mrw"] Jan 28 17:20:42 crc kubenswrapper[4903]: E0128 17:20:42.083405 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-httpd" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.083420 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-httpd" Jan 28 17:20:42 crc kubenswrapper[4903]: E0128 17:20:42.083446 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-api" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.083451 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-api" Jan 28 17:20:42 crc kubenswrapper[4903]: E0128 17:20:42.083467 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6768af8e-8766-42db-95dd-802258413317" containerName="init" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.083473 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6768af8e-8766-42db-95dd-802258413317" containerName="init" Jan 28 17:20:42 crc kubenswrapper[4903]: E0128 17:20:42.083484 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6768af8e-8766-42db-95dd-802258413317" containerName="dnsmasq-dns" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.083490 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6768af8e-8766-42db-95dd-802258413317" containerName="dnsmasq-dns" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.083671 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-api" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.083683 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c27dff1-b445-43ac-b8d9-7b6bbfe9f201" containerName="neutron-httpd" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.083715 4903 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6768af8e-8766-42db-95dd-802258413317" containerName="dnsmasq-dns" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.084255 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.087349 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.087432 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.087505 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.087357 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.092751 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6jflm" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.106768 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-t2mrw"] Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.116280 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fv8ml"] Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.118086 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.135482 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-t2mrw"] Jan 28 17:20:42 crc kubenswrapper[4903]: E0128 17:20:42.136614 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-tb7rl ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-t2mrw" podUID="477ec7dd-bd00-4539-ab68-d6176ca5514e" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155147 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-combined-ca-bundle\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155214 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-dispersionconf\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155250 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-scripts\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155298 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-ring-data-devices\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155333 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-swiftconf\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155364 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-scripts\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155399 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/477ec7dd-bd00-4539-ab68-d6176ca5514e-etc-swift\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155427 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkm8\" (UniqueName: \"kubernetes.io/projected/2a26f791-5856-41c5-88d3-fb91e564f8ac-kube-api-access-qfkm8\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155447 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-swiftconf\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155477 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7rl\" (UniqueName: \"kubernetes.io/projected/477ec7dd-bd00-4539-ab68-d6176ca5514e-kube-api-access-tb7rl\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155507 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-combined-ca-bundle\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155565 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-ring-data-devices\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155601 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a26f791-5856-41c5-88d3-fb91e564f8ac-etc-swift\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.155646 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-dispersionconf\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.174599 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fv8ml"] Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.256767 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7rl\" (UniqueName: \"kubernetes.io/projected/477ec7dd-bd00-4539-ab68-d6176ca5514e-kube-api-access-tb7rl\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.256819 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-combined-ca-bundle\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.256853 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-ring-data-devices\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.256891 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a26f791-5856-41c5-88d3-fb91e564f8ac-etc-swift\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.256931 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-dispersionconf\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.256964 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-combined-ca-bundle\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.256983 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-dispersionconf\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " 
pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.257005 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-scripts\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.257046 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-ring-data-devices\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.257069 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-swiftconf\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.257092 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-scripts\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.257115 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/477ec7dd-bd00-4539-ab68-d6176ca5514e-etc-swift\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.257134 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkm8\" (UniqueName: \"kubernetes.io/projected/2a26f791-5856-41c5-88d3-fb91e564f8ac-kube-api-access-qfkm8\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.257150 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-swiftconf\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.258114 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-scripts\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.258821 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-ring-data-devices\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.261765 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/477ec7dd-bd00-4539-ab68-d6176ca5514e-etc-swift\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.264499 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd66ff975-qnwdt"] Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.265608 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-dispersionconf\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.266022 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a26f791-5856-41c5-88d3-fb91e564f8ac-etc-swift\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.266702 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-combined-ca-bundle\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.267120 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.274964 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-scripts\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.276618 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-ring-data-devices\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.276849 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-swiftconf\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.277202 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-swiftconf\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.281806 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-combined-ca-bundle\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " 
pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.296121 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd66ff975-qnwdt"] Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.299596 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7rl\" (UniqueName: \"kubernetes.io/projected/477ec7dd-bd00-4539-ab68-d6176ca5514e-kube-api-access-tb7rl\") pod \"swift-ring-rebalance-t2mrw\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.299618 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-dispersionconf\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.303020 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfkm8\" (UniqueName: \"kubernetes.io/projected/2a26f791-5856-41c5-88d3-fb91e564f8ac-kube-api-access-qfkm8\") pod \"swift-ring-rebalance-fv8ml\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.352927 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.359436 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-dns-svc\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.359520 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhq6v\" (UniqueName: \"kubernetes.io/projected/f719d804-3532-4619-b702-61e91ff99905-kube-api-access-hhq6v\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.359658 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.359685 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.359711 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-config\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " 
pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.385808 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.445917 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.460838 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/477ec7dd-bd00-4539-ab68-d6176ca5514e-etc-swift\") pod \"477ec7dd-bd00-4539-ab68-d6176ca5514e\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461155 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-swiftconf\") pod \"477ec7dd-bd00-4539-ab68-d6176ca5514e\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461179 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-scripts\") pod \"477ec7dd-bd00-4539-ab68-d6176ca5514e\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461213 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-ring-data-devices\") pod \"477ec7dd-bd00-4539-ab68-d6176ca5514e\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461302 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-dispersionconf\") pod \"477ec7dd-bd00-4539-ab68-d6176ca5514e\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461355 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-combined-ca-bundle\") pod \"477ec7dd-bd00-4539-ab68-d6176ca5514e\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461373 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb7rl\" (UniqueName: \"kubernetes.io/projected/477ec7dd-bd00-4539-ab68-d6176ca5514e-kube-api-access-tb7rl\") pod \"477ec7dd-bd00-4539-ab68-d6176ca5514e\" (UID: \"477ec7dd-bd00-4539-ab68-d6176ca5514e\") " Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461585 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-dns-svc\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461694 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhq6v\" (UniqueName: \"kubernetes.io/projected/f719d804-3532-4619-b702-61e91ff99905-kube-api-access-hhq6v\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" 
(UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461718 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461740 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461763 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-config\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.461950 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-scripts" (OuterVolumeSpecName: "scripts") pod "477ec7dd-bd00-4539-ab68-d6176ca5514e" (UID: "477ec7dd-bd00-4539-ab68-d6176ca5514e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.462213 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477ec7dd-bd00-4539-ab68-d6176ca5514e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "477ec7dd-bd00-4539-ab68-d6176ca5514e" (UID: "477ec7dd-bd00-4539-ab68-d6176ca5514e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.466205 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "477ec7dd-bd00-4539-ab68-d6176ca5514e" (UID: "477ec7dd-bd00-4539-ab68-d6176ca5514e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.466238 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-dns-svc\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.466955 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.467966 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "477ec7dd-bd00-4539-ab68-d6176ca5514e" (UID: "477ec7dd-bd00-4539-ab68-d6176ca5514e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.468509 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-config\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.468719 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "477ec7dd-bd00-4539-ab68-d6176ca5514e" (UID: "477ec7dd-bd00-4539-ab68-d6176ca5514e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.469082 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "477ec7dd-bd00-4539-ab68-d6176ca5514e" (UID: "477ec7dd-bd00-4539-ab68-d6176ca5514e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.474781 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477ec7dd-bd00-4539-ab68-d6176ca5514e-kube-api-access-tb7rl" (OuterVolumeSpecName: "kube-api-access-tb7rl") pod "477ec7dd-bd00-4539-ab68-d6176ca5514e" (UID: "477ec7dd-bd00-4539-ab68-d6176ca5514e"). InnerVolumeSpecName "kube-api-access-tb7rl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.474935 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.490122 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhq6v\" (UniqueName: \"kubernetes.io/projected/f719d804-3532-4619-b702-61e91ff99905-kube-api-access-hhq6v\") pod \"dnsmasq-dns-5fd66ff975-qnwdt\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.565621 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.565662 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb7rl\" (UniqueName: \"kubernetes.io/projected/477ec7dd-bd00-4539-ab68-d6176ca5514e-kube-api-access-tb7rl\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.565678 4903 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/477ec7dd-bd00-4539-ab68-d6176ca5514e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.565690 4903 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.565700 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.565713 4903 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/477ec7dd-bd00-4539-ab68-d6176ca5514e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.565723 4903 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/477ec7dd-bd00-4539-ab68-d6176ca5514e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.681726 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:42 crc kubenswrapper[4903]: I0128 17:20:42.949080 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fv8ml"] Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.228958 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd66ff975-qnwdt"] Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.364774 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" event={"ID":"f719d804-3532-4619-b702-61e91ff99905","Type":"ContainerStarted","Data":"f5d0a0deac77fba402be181f5afa5d8d597d961a187f35d7112e695cdb688138"} Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.369893 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-t2mrw" Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.370067 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fv8ml" event={"ID":"2a26f791-5856-41c5-88d3-fb91e564f8ac","Type":"ContainerStarted","Data":"1aef0e23fcddb0c078652b9916d54786d61712e13a9e006116cb63df68b94cdf"} Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.370119 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fv8ml" event={"ID":"2a26f791-5856-41c5-88d3-fb91e564f8ac","Type":"ContainerStarted","Data":"80bf3450b319b0654cd15558a99790fd364ada64a9fab6f0c0cc7265d83fe509"} Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.393044 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-fv8ml" podStartSLOduration=1.393022647 podStartE2EDuration="1.393022647s" podCreationTimestamp="2026-01-28 17:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:20:43.390411666 +0000 UTC m=+5715.666383177" watchObservedRunningTime="2026-01-28 17:20:43.393022647 +0000 UTC m=+5715.668994148" Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.544657 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-t2mrw"] Jan 28 17:20:43 crc kubenswrapper[4903]: I0128 17:20:43.553008 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-t2mrw"] Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.312188 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6b8999f9d6-ffjv7"] Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.313752 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.316226 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.332460 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b8999f9d6-ffjv7"] Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.386201 4903 generic.go:334] "Generic (PLEG): container finished" podID="f719d804-3532-4619-b702-61e91ff99905" containerID="8bf41b837c65786518621ee351b531ac5c00c4e5c10b0963b2b2adb613c98db0" exitCode=0 Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.386605 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" event={"ID":"f719d804-3532-4619-b702-61e91ff99905","Type":"ContainerDied","Data":"8bf41b837c65786518621ee351b531ac5c00c4e5c10b0963b2b2adb613c98db0"} Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.407375 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9dq9\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-kube-api-access-f9dq9\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.407769 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-etc-swift\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.407915 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-config-data\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.408050 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-combined-ca-bundle\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.408178 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-run-httpd\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.408284 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-log-httpd\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.427002 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="477ec7dd-bd00-4539-ab68-d6176ca5514e" path="/var/lib/kubelet/pods/477ec7dd-bd00-4539-ab68-d6176ca5514e/volumes" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.510046 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-etc-swift\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.510199 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-config-data\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.510228 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-combined-ca-bundle\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.510303 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-run-httpd\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.510334 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-log-httpd\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.510479 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9dq9\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-kube-api-access-f9dq9\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.511610 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-log-httpd\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.511923 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-run-httpd\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.514887 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-combined-ca-bundle\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " 
pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.515480 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-etc-swift\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.518470 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-config-data\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.532708 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9dq9\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-kube-api-access-f9dq9\") pod \"swift-proxy-6b8999f9d6-ffjv7\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:44 crc kubenswrapper[4903]: I0128 17:20:44.632002 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:45 crc kubenswrapper[4903]: I0128 17:20:45.309018 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b8999f9d6-ffjv7"] Jan 28 17:20:45 crc kubenswrapper[4903]: I0128 17:20:45.399479 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" event={"ID":"1c096b1b-286b-40a3-a5c2-f68189079513","Type":"ContainerStarted","Data":"c1506ebe984fa262a726ef020add48dba0e8d1a9c15be68ca59f4d8ffdea6107"} Jan 28 17:20:45 crc kubenswrapper[4903]: I0128 17:20:45.402160 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" event={"ID":"f719d804-3532-4619-b702-61e91ff99905","Type":"ContainerStarted","Data":"8c51c6d0adc0b7bd2e1b3a4932a79291c6b97b683805065523aedfe04c911b7b"} Jan 28 17:20:45 crc kubenswrapper[4903]: I0128 17:20:45.404230 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.426703 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.427013 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" event={"ID":"1c096b1b-286b-40a3-a5c2-f68189079513","Type":"ContainerStarted","Data":"01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482"} Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.427032 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.427040 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" event={"ID":"1c096b1b-286b-40a3-a5c2-f68189079513","Type":"ContainerStarted","Data":"df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624"} Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.443886 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" podStartSLOduration=4.443867072 
podStartE2EDuration="4.443867072s" podCreationTimestamp="2026-01-28 17:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:20:45.426035653 +0000 UTC m=+5717.702007174" watchObservedRunningTime="2026-01-28 17:20:46.443867072 +0000 UTC m=+5718.719838583" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.446157 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" podStartSLOduration=2.446147244 podStartE2EDuration="2.446147244s" podCreationTimestamp="2026-01-28 17:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:20:46.436745318 +0000 UTC m=+5718.712716829" watchObservedRunningTime="2026-01-28 17:20:46.446147244 +0000 UTC m=+5718.722118755" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.518311 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5df94859fd-ftzxk"] Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.536253 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.539951 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.540215 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554142 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-etc-swift\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554281 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-log-httpd\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554316 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ptn\" (UniqueName: \"kubernetes.io/projected/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-kube-api-access-79ptn\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554339 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-internal-tls-certs\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554400 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-combined-ca-bundle\") pod \"swift-proxy-5df94859fd-ftzxk\" 
(UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554424 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-config-data\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554448 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-public-tls-certs\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.554470 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-run-httpd\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.565004 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5df94859fd-ftzxk"] Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656239 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-combined-ca-bundle\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656305 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-config-data\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656342 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-public-tls-certs\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656367 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-run-httpd\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656403 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-etc-swift\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656491 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-log-httpd\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656600 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79ptn\" (UniqueName: \"kubernetes.io/projected/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-kube-api-access-79ptn\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.656624 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-internal-tls-certs\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.657414 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-run-httpd\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.657934 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-log-httpd\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.662094 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-public-tls-certs\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.663173 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-combined-ca-bundle\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.663735 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-etc-swift\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.664324 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-internal-tls-certs\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.665201 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-config-data\") pod 
\"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.676776 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79ptn\" (UniqueName: \"kubernetes.io/projected/d5f34185-b832-4eb0-a3b6-61e7af5f96ec-kube-api-access-79ptn\") pod \"swift-proxy-5df94859fd-ftzxk\" (UID: \"d5f34185-b832-4eb0-a3b6-61e7af5f96ec\") " pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:46 crc kubenswrapper[4903]: I0128 17:20:46.879598 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:47 crc kubenswrapper[4903]: I0128 17:20:47.569901 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5df94859fd-ftzxk"] Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.434365 4903 generic.go:334] "Generic (PLEG): container finished" podID="2a26f791-5856-41c5-88d3-fb91e564f8ac" containerID="1aef0e23fcddb0c078652b9916d54786d61712e13a9e006116cb63df68b94cdf" exitCode=0 Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.434710 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fv8ml" event={"ID":"2a26f791-5856-41c5-88d3-fb91e564f8ac","Type":"ContainerDied","Data":"1aef0e23fcddb0c078652b9916d54786d61712e13a9e006116cb63df68b94cdf"} Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.450182 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5df94859fd-ftzxk" event={"ID":"d5f34185-b832-4eb0-a3b6-61e7af5f96ec","Type":"ContainerStarted","Data":"30cb15fe423717ecfeaf7e5382bdf10ece8337f3ac5c6f3645cbe3d19e0ffbb5"} Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.450261 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5df94859fd-ftzxk" event={"ID":"d5f34185-b832-4eb0-a3b6-61e7af5f96ec","Type":"ContainerStarted","Data":"ae7e3abf1c2849c0015735ca3e80835d8f471ca2119ae6f5383bc98cd1929442"} Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.450288 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5df94859fd-ftzxk" event={"ID":"d5f34185-b832-4eb0-a3b6-61e7af5f96ec","Type":"ContainerStarted","Data":"0c28e9561b57cfe37552c58ff32d3c67c184b20d671599ca451e9302a353bce1"} Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.451592 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.451730 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:48 crc kubenswrapper[4903]: I0128 17:20:48.498110 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5df94859fd-ftzxk" podStartSLOduration=2.498092544 podStartE2EDuration="2.498092544s" podCreationTimestamp="2026-01-28 17:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:20:48.488219747 +0000 UTC m=+5720.764191268" watchObservedRunningTime="2026-01-28 17:20:48.498092544 +0000 UTC m=+5720.774064055" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.413909 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:20:49 crc kubenswrapper[4903]: E0128 17:20:49.414583 4903 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.815483 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.928686 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-swiftconf\") pod \"2a26f791-5856-41c5-88d3-fb91e564f8ac\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.928755 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfkm8\" (UniqueName: \"kubernetes.io/projected/2a26f791-5856-41c5-88d3-fb91e564f8ac-kube-api-access-qfkm8\") pod \"2a26f791-5856-41c5-88d3-fb91e564f8ac\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.928867 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-combined-ca-bundle\") pod \"2a26f791-5856-41c5-88d3-fb91e564f8ac\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.928938 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a26f791-5856-41c5-88d3-fb91e564f8ac-etc-swift\") pod \"2a26f791-5856-41c5-88d3-fb91e564f8ac\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.929024 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-ring-data-devices\") pod \"2a26f791-5856-41c5-88d3-fb91e564f8ac\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.929084 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-dispersionconf\") pod \"2a26f791-5856-41c5-88d3-fb91e564f8ac\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.929111 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-scripts\") pod \"2a26f791-5856-41c5-88d3-fb91e564f8ac\" (UID: \"2a26f791-5856-41c5-88d3-fb91e564f8ac\") " Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.930150 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2a26f791-5856-41c5-88d3-fb91e564f8ac" (UID: "2a26f791-5856-41c5-88d3-fb91e564f8ac"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.930546 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a26f791-5856-41c5-88d3-fb91e564f8ac-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2a26f791-5856-41c5-88d3-fb91e564f8ac" (UID: "2a26f791-5856-41c5-88d3-fb91e564f8ac"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.935567 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a26f791-5856-41c5-88d3-fb91e564f8ac-kube-api-access-qfkm8" (OuterVolumeSpecName: "kube-api-access-qfkm8") pod "2a26f791-5856-41c5-88d3-fb91e564f8ac" (UID: "2a26f791-5856-41c5-88d3-fb91e564f8ac"). InnerVolumeSpecName "kube-api-access-qfkm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.939821 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2a26f791-5856-41c5-88d3-fb91e564f8ac" (UID: "2a26f791-5856-41c5-88d3-fb91e564f8ac"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.958360 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a26f791-5856-41c5-88d3-fb91e564f8ac" (UID: "2a26f791-5856-41c5-88d3-fb91e564f8ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.980028 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2a26f791-5856-41c5-88d3-fb91e564f8ac" (UID: "2a26f791-5856-41c5-88d3-fb91e564f8ac"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:49 crc kubenswrapper[4903]: I0128 17:20:49.986428 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-scripts" (OuterVolumeSpecName: "scripts") pod "2a26f791-5856-41c5-88d3-fb91e564f8ac" (UID: "2a26f791-5856-41c5-88d3-fb91e564f8ac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.031716 4903 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.031786 4903 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.031798 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a26f791-5856-41c5-88d3-fb91e564f8ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.031809 4903 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.031823 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfkm8\" (UniqueName: \"kubernetes.io/projected/2a26f791-5856-41c5-88d3-fb91e564f8ac-kube-api-access-qfkm8\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.031836 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a26f791-5856-41c5-88d3-fb91e564f8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.031847 4903 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a26f791-5856-41c5-88d3-fb91e564f8ac-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.469570 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fv8ml" event={"ID":"2a26f791-5856-41c5-88d3-fb91e564f8ac","Type":"ContainerDied","Data":"80bf3450b319b0654cd15558a99790fd364ada64a9fab6f0c0cc7265d83fe509"} Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.469620 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80bf3450b319b0654cd15558a99790fd364ada64a9fab6f0c0cc7265d83fe509" Jan 28 17:20:50 crc kubenswrapper[4903]: I0128 17:20:50.469636 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fv8ml" Jan 28 17:20:52 crc kubenswrapper[4903]: I0128 17:20:52.683803 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:20:52 crc kubenswrapper[4903]: I0128 17:20:52.754839 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfdfff4d7-wh5xv"] Jan 28 17:20:52 crc kubenswrapper[4903]: I0128 17:20:52.755097 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" podUID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerName="dnsmasq-dns" containerID="cri-o://f0c10d7cdea84a83cf9c3a3a6574455fa48fcf7749778d22cb5fdd9882abd56e" gracePeriod=10 Jan 28 17:20:53 crc kubenswrapper[4903]: I0128 17:20:53.499586 4903 generic.go:334] "Generic (PLEG): container finished" podID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerID="f0c10d7cdea84a83cf9c3a3a6574455fa48fcf7749778d22cb5fdd9882abd56e" exitCode=0 Jan 28 17:20:53 crc kubenswrapper[4903]: I0128 17:20:53.499786 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" event={"ID":"9cc53a7e-4590-488e-a1c9-4a3f8a1baece","Type":"ContainerDied","Data":"f0c10d7cdea84a83cf9c3a3a6574455fa48fcf7749778d22cb5fdd9882abd56e"} Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.141800 4903 scope.go:117] "RemoveContainer" containerID="5719e287e19d1aada1feb99ada60f33ea5c12f2915d4d7dc5daf45de50895285" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.145154 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.309558 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-nb\") pod \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.309851 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-sb\") pod \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.310006 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-config\") pod \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.310199 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-dns-svc\") pod \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.310324 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh5zw\" (UniqueName: \"kubernetes.io/projected/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-kube-api-access-nh5zw\") pod \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\" (UID: \"9cc53a7e-4590-488e-a1c9-4a3f8a1baece\") " Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.315826 4903 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-kube-api-access-nh5zw" (OuterVolumeSpecName: "kube-api-access-nh5zw") pod "9cc53a7e-4590-488e-a1c9-4a3f8a1baece" (UID: "9cc53a7e-4590-488e-a1c9-4a3f8a1baece"). InnerVolumeSpecName "kube-api-access-nh5zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.363461 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cc53a7e-4590-488e-a1c9-4a3f8a1baece" (UID: "9cc53a7e-4590-488e-a1c9-4a3f8a1baece"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.370597 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9cc53a7e-4590-488e-a1c9-4a3f8a1baece" (UID: "9cc53a7e-4590-488e-a1c9-4a3f8a1baece"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.413007 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.413040 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh5zw\" (UniqueName: \"kubernetes.io/projected/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-kube-api-access-nh5zw\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.413052 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.477751 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-config" (OuterVolumeSpecName: "config") pod "9cc53a7e-4590-488e-a1c9-4a3f8a1baece" (UID: "9cc53a7e-4590-488e-a1c9-4a3f8a1baece"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.511662 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.511678 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfdfff4d7-wh5xv" event={"ID":"9cc53a7e-4590-488e-a1c9-4a3f8a1baece","Type":"ContainerDied","Data":"777aa5449da9578708a471c9cf54dfa3df59a0bae52dbe9f325862888e5379ee"} Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.511740 4903 scope.go:117] "RemoveContainer" containerID="f0c10d7cdea84a83cf9c3a3a6574455fa48fcf7749778d22cb5fdd9882abd56e" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.514654 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.534081 4903 scope.go:117] "RemoveContainer" containerID="33f92d9f7e88a8cfff37e8d7f5a6a9d874903049f567bd030447e0d769163ce7" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.581115 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9cc53a7e-4590-488e-a1c9-4a3f8a1baece" (UID: "9cc53a7e-4590-488e-a1c9-4a3f8a1baece"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.616688 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cc53a7e-4590-488e-a1c9-4a3f8a1baece-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.635146 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.639660 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.859120 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfdfff4d7-wh5xv"] Jan 28 17:20:54 crc kubenswrapper[4903]: I0128 17:20:54.867686 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bfdfff4d7-wh5xv"] Jan 28 17:20:56 crc kubenswrapper[4903]: I0128 17:20:56.422906 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" path="/var/lib/kubelet/pods/9cc53a7e-4590-488e-a1c9-4a3f8a1baece/volumes" Jan 28 17:20:56 crc kubenswrapper[4903]: I0128 17:20:56.888003 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:56 crc kubenswrapper[4903]: I0128 17:20:56.888572 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5df94859fd-ftzxk" Jan 28 17:20:56 crc kubenswrapper[4903]: I0128 17:20:56.963434 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6b8999f9d6-ffjv7"] Jan 28 17:20:56 crc kubenswrapper[4903]: I0128 17:20:56.963757 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-httpd" containerID="cri-o://df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624" gracePeriod=30 Jan 28 17:20:56 crc 
kubenswrapper[4903]: I0128 17:20:56.964502 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-server" containerID="cri-o://01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482" gracePeriod=30 Jan 28 17:20:57 crc kubenswrapper[4903]: I0128 17:20:57.546287 4903 generic.go:334] "Generic (PLEG): container finished" podID="1c096b1b-286b-40a3-a5c2-f68189079513" containerID="df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624" exitCode=0 Jan 28 17:20:57 crc kubenswrapper[4903]: I0128 17:20:57.546759 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" event={"ID":"1c096b1b-286b-40a3-a5c2-f68189079513","Type":"ContainerDied","Data":"df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624"} Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.354884 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.521292 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-etc-swift\") pod \"1c096b1b-286b-40a3-a5c2-f68189079513\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.521440 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-combined-ca-bundle\") pod \"1c096b1b-286b-40a3-a5c2-f68189079513\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.521538 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-log-httpd\") pod \"1c096b1b-286b-40a3-a5c2-f68189079513\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.521586 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-run-httpd\") pod \"1c096b1b-286b-40a3-a5c2-f68189079513\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.521628 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9dq9\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-kube-api-access-f9dq9\") pod \"1c096b1b-286b-40a3-a5c2-f68189079513\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.521680 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-config-data\") pod \"1c096b1b-286b-40a3-a5c2-f68189079513\" (UID: \"1c096b1b-286b-40a3-a5c2-f68189079513\") " Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.522002 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1c096b1b-286b-40a3-a5c2-f68189079513" (UID: "1c096b1b-286b-40a3-a5c2-f68189079513"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.522034 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1c096b1b-286b-40a3-a5c2-f68189079513" (UID: "1c096b1b-286b-40a3-a5c2-f68189079513"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.522388 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.522411 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1c096b1b-286b-40a3-a5c2-f68189079513-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.526730 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1c096b1b-286b-40a3-a5c2-f68189079513" (UID: "1c096b1b-286b-40a3-a5c2-f68189079513"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.544288 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-kube-api-access-f9dq9" (OuterVolumeSpecName: "kube-api-access-f9dq9") pod "1c096b1b-286b-40a3-a5c2-f68189079513" (UID: "1c096b1b-286b-40a3-a5c2-f68189079513"). InnerVolumeSpecName "kube-api-access-f9dq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.559918 4903 generic.go:334] "Generic (PLEG): container finished" podID="1c096b1b-286b-40a3-a5c2-f68189079513" containerID="01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482" exitCode=0 Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.559970 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" event={"ID":"1c096b1b-286b-40a3-a5c2-f68189079513","Type":"ContainerDied","Data":"01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482"} Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.560001 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" event={"ID":"1c096b1b-286b-40a3-a5c2-f68189079513","Type":"ContainerDied","Data":"c1506ebe984fa262a726ef020add48dba0e8d1a9c15be68ca59f4d8ffdea6107"} Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.560022 4903 scope.go:117] "RemoveContainer" containerID="01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.560203 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b8999f9d6-ffjv7" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.578630 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-config-data" (OuterVolumeSpecName: "config-data") pod "1c096b1b-286b-40a3-a5c2-f68189079513" (UID: "1c096b1b-286b-40a3-a5c2-f68189079513"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.579662 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c096b1b-286b-40a3-a5c2-f68189079513" (UID: "1c096b1b-286b-40a3-a5c2-f68189079513"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.623824 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.623870 4903 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.623883 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c096b1b-286b-40a3-a5c2-f68189079513-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.623898 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9dq9\" (UniqueName: \"kubernetes.io/projected/1c096b1b-286b-40a3-a5c2-f68189079513-kube-api-access-f9dq9\") on node \"crc\" DevicePath \"\"" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.662186 4903 scope.go:117] "RemoveContainer" containerID="df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.681390 4903 scope.go:117] "RemoveContainer" containerID="01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482" Jan 28 17:20:58 crc kubenswrapper[4903]: E0128 17:20:58.682066 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482\": container with ID starting with 01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482 not found: ID does not exist" containerID="01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.682099 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482"} err="failed to get container status \"01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482\": rpc error: code = NotFound desc = could not find container \"01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482\": container with ID starting with 01e72bc198ed3ef3de92199ff3e7d653ed9b360960a30225b960df919cc06482 not found: ID does not exist" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.682121 4903 scope.go:117] "RemoveContainer" containerID="df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624" Jan 28 17:20:58 crc kubenswrapper[4903]: E0128 17:20:58.682458 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624\": container with ID starting with df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624 not found: ID does not exist" 
containerID="df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.682508 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624"} err="failed to get container status \"df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624\": rpc error: code = NotFound desc = could not find container \"df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624\": container with ID starting with df8a46f13780c6b5849c250f5b1bf7c21a6d86190e3f84bfce3605c42cc43624 not found: ID does not exist" Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.898590 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6b8999f9d6-ffjv7"] Jan 28 17:20:58 crc kubenswrapper[4903]: I0128 17:20:58.908570 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6b8999f9d6-ffjv7"] Jan 28 17:21:00 crc kubenswrapper[4903]: I0128 17:21:00.433728 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" path="/var/lib/kubelet/pods/1c096b1b-286b-40a3-a5c2-f68189079513/volumes" Jan 28 17:21:01 crc kubenswrapper[4903]: I0128 17:21:01.413816 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:21:01 crc kubenswrapper[4903]: E0128 17:21:01.414594 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.041094 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w9wvx"] Jan 28 17:21:03 crc kubenswrapper[4903]: E0128 17:21:03.041841 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-httpd" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.041857 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-httpd" Jan 28 17:21:03 crc kubenswrapper[4903]: E0128 17:21:03.041871 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerName="init" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.041877 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerName="init" Jan 28 17:21:03 crc kubenswrapper[4903]: E0128 17:21:03.041897 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a26f791-5856-41c5-88d3-fb91e564f8ac" containerName="swift-ring-rebalance" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.041903 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a26f791-5856-41c5-88d3-fb91e564f8ac" containerName="swift-ring-rebalance" Jan 28 17:21:03 crc kubenswrapper[4903]: E0128 17:21:03.041922 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-server" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.041928 4903 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-server" Jan 28 17:21:03 crc kubenswrapper[4903]: E0128 17:21:03.041937 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerName="dnsmasq-dns" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.041942 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerName="dnsmasq-dns" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.042117 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-httpd" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.042135 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c096b1b-286b-40a3-a5c2-f68189079513" containerName="proxy-server" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.042146 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cc53a7e-4590-488e-a1c9-4a3f8a1baece" containerName="dnsmasq-dns" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.042160 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a26f791-5856-41c5-88d3-fb91e564f8ac" containerName="swift-ring-rebalance" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.042706 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.050135 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w9wvx"] Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.126685 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d358-account-create-update-jlj5p"] Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.127930 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.136572 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d358-account-create-update-jlj5p"] Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.140009 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.205650 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9b5f\" (UniqueName: \"kubernetes.io/projected/44e37ca5-27ba-423f-86c5-854a2119285c-kube-api-access-z9b5f\") pod \"cinder-db-create-w9wvx\" (UID: \"44e37ca5-27ba-423f-86c5-854a2119285c\") " pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.206040 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e37ca5-27ba-423f-86c5-854a2119285c-operator-scripts\") pod \"cinder-db-create-w9wvx\" (UID: \"44e37ca5-27ba-423f-86c5-854a2119285c\") " pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.307834 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e053efc4-84f0-4d97-a334-180738eb2791-operator-scripts\") pod \"cinder-d358-account-create-update-jlj5p\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.307886 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g987b\" (UniqueName: \"kubernetes.io/projected/e053efc4-84f0-4d97-a334-180738eb2791-kube-api-access-g987b\") pod \"cinder-d358-account-create-update-jlj5p\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.308107 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9b5f\" (UniqueName: \"kubernetes.io/projected/44e37ca5-27ba-423f-86c5-854a2119285c-kube-api-access-z9b5f\") pod \"cinder-db-create-w9wvx\" (UID: \"44e37ca5-27ba-423f-86c5-854a2119285c\") " pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.308381 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e37ca5-27ba-423f-86c5-854a2119285c-operator-scripts\") pod \"cinder-db-create-w9wvx\" (UID: \"44e37ca5-27ba-423f-86c5-854a2119285c\") " pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.309130 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e37ca5-27ba-423f-86c5-854a2119285c-operator-scripts\") pod \"cinder-db-create-w9wvx\" (UID: \"44e37ca5-27ba-423f-86c5-854a2119285c\") " pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.327421 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9b5f\" (UniqueName: \"kubernetes.io/projected/44e37ca5-27ba-423f-86c5-854a2119285c-kube-api-access-z9b5f\") pod \"cinder-db-create-w9wvx\" (UID: 
\"44e37ca5-27ba-423f-86c5-854a2119285c\") " pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.375084 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.410412 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e053efc4-84f0-4d97-a334-180738eb2791-operator-scripts\") pod \"cinder-d358-account-create-update-jlj5p\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.410463 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g987b\" (UniqueName: \"kubernetes.io/projected/e053efc4-84f0-4d97-a334-180738eb2791-kube-api-access-g987b\") pod \"cinder-d358-account-create-update-jlj5p\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.411422 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e053efc4-84f0-4d97-a334-180738eb2791-operator-scripts\") pod \"cinder-d358-account-create-update-jlj5p\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.433249 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g987b\" (UniqueName: \"kubernetes.io/projected/e053efc4-84f0-4d97-a334-180738eb2791-kube-api-access-g987b\") pod \"cinder-d358-account-create-update-jlj5p\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.443824 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.896906 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w9wvx"] Jan 28 17:21:03 crc kubenswrapper[4903]: I0128 17:21:03.977621 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d358-account-create-update-jlj5p"] Jan 28 17:21:03 crc kubenswrapper[4903]: W0128 17:21:03.979665 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode053efc4_84f0_4d97_a334_180738eb2791.slice/crio-a2bb66f9e187da9b7d38eeab946626431eb4ecdee4cd06874af3e776643cabbf WatchSource:0}: Error finding container a2bb66f9e187da9b7d38eeab946626431eb4ecdee4cd06874af3e776643cabbf: Status 404 returned error can't find the container with id a2bb66f9e187da9b7d38eeab946626431eb4ecdee4cd06874af3e776643cabbf Jan 28 17:21:04 crc kubenswrapper[4903]: I0128 17:21:04.623788 4903 generic.go:334] "Generic (PLEG): container finished" podID="e053efc4-84f0-4d97-a334-180738eb2791" containerID="1989441333e5947eb1c1166c9e14c17407bb506be6b65ad354406d981c98b1c7" exitCode=0 Jan 28 17:21:04 crc kubenswrapper[4903]: I0128 17:21:04.623847 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d358-account-create-update-jlj5p" event={"ID":"e053efc4-84f0-4d97-a334-180738eb2791","Type":"ContainerDied","Data":"1989441333e5947eb1c1166c9e14c17407bb506be6b65ad354406d981c98b1c7"} Jan 28 17:21:04 crc kubenswrapper[4903]: I0128 17:21:04.624153 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d358-account-create-update-jlj5p" event={"ID":"e053efc4-84f0-4d97-a334-180738eb2791","Type":"ContainerStarted","Data":"a2bb66f9e187da9b7d38eeab946626431eb4ecdee4cd06874af3e776643cabbf"} Jan 28 17:21:04 crc kubenswrapper[4903]: I0128 17:21:04.626155 4903 generic.go:334] "Generic (PLEG): container finished" podID="44e37ca5-27ba-423f-86c5-854a2119285c" containerID="a41b7015b8eecadd87d1859945ee7f5ac9da3596d91808ae673232a4788df15b" exitCode=0 Jan 28 17:21:04 crc kubenswrapper[4903]: I0128 17:21:04.626203 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9wvx" event={"ID":"44e37ca5-27ba-423f-86c5-854a2119285c","Type":"ContainerDied","Data":"a41b7015b8eecadd87d1859945ee7f5ac9da3596d91808ae673232a4788df15b"} Jan 28 17:21:04 crc kubenswrapper[4903]: I0128 17:21:04.626249 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9wvx" event={"ID":"44e37ca5-27ba-423f-86c5-854a2119285c","Type":"ContainerStarted","Data":"5173030b7624c1990f9ab16b99230e0900f92a9eacf39cbd04e8453cbadfaf06"} Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.010563 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.016132 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.161328 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e053efc4-84f0-4d97-a334-180738eb2791-operator-scripts\") pod \"e053efc4-84f0-4d97-a334-180738eb2791\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.161388 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e37ca5-27ba-423f-86c5-854a2119285c-operator-scripts\") pod \"44e37ca5-27ba-423f-86c5-854a2119285c\" (UID: \"44e37ca5-27ba-423f-86c5-854a2119285c\") " Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.161413 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9b5f\" (UniqueName: \"kubernetes.io/projected/44e37ca5-27ba-423f-86c5-854a2119285c-kube-api-access-z9b5f\") pod \"44e37ca5-27ba-423f-86c5-854a2119285c\" (UID: \"44e37ca5-27ba-423f-86c5-854a2119285c\") " Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.161443 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g987b\" (UniqueName: \"kubernetes.io/projected/e053efc4-84f0-4d97-a334-180738eb2791-kube-api-access-g987b\") pod \"e053efc4-84f0-4d97-a334-180738eb2791\" (UID: \"e053efc4-84f0-4d97-a334-180738eb2791\") " Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.162182 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e053efc4-84f0-4d97-a334-180738eb2791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e053efc4-84f0-4d97-a334-180738eb2791" (UID: "e053efc4-84f0-4d97-a334-180738eb2791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.162256 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e37ca5-27ba-423f-86c5-854a2119285c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44e37ca5-27ba-423f-86c5-854a2119285c" (UID: "44e37ca5-27ba-423f-86c5-854a2119285c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.174766 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e053efc4-84f0-4d97-a334-180738eb2791-kube-api-access-g987b" (OuterVolumeSpecName: "kube-api-access-g987b") pod "e053efc4-84f0-4d97-a334-180738eb2791" (UID: "e053efc4-84f0-4d97-a334-180738eb2791"). InnerVolumeSpecName "kube-api-access-g987b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.174774 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e37ca5-27ba-423f-86c5-854a2119285c-kube-api-access-z9b5f" (OuterVolumeSpecName: "kube-api-access-z9b5f") pod "44e37ca5-27ba-423f-86c5-854a2119285c" (UID: "44e37ca5-27ba-423f-86c5-854a2119285c"). InnerVolumeSpecName "kube-api-access-z9b5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.263740 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e053efc4-84f0-4d97-a334-180738eb2791-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.263775 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44e37ca5-27ba-423f-86c5-854a2119285c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.263785 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9b5f\" (UniqueName: \"kubernetes.io/projected/44e37ca5-27ba-423f-86c5-854a2119285c-kube-api-access-z9b5f\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.263798 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g987b\" (UniqueName: \"kubernetes.io/projected/e053efc4-84f0-4d97-a334-180738eb2791-kube-api-access-g987b\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.644794 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w9wvx" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.644794 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w9wvx" event={"ID":"44e37ca5-27ba-423f-86c5-854a2119285c","Type":"ContainerDied","Data":"5173030b7624c1990f9ab16b99230e0900f92a9eacf39cbd04e8453cbadfaf06"} Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.645149 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5173030b7624c1990f9ab16b99230e0900f92a9eacf39cbd04e8453cbadfaf06" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.648516 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d358-account-create-update-jlj5p" event={"ID":"e053efc4-84f0-4d97-a334-180738eb2791","Type":"ContainerDied","Data":"a2bb66f9e187da9b7d38eeab946626431eb4ecdee4cd06874af3e776643cabbf"} Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.648554 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2bb66f9e187da9b7d38eeab946626431eb4ecdee4cd06874af3e776643cabbf" Jan 28 17:21:06 crc kubenswrapper[4903]: I0128 17:21:06.648761 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-d358-account-create-update-jlj5p" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.368697 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n7jdx"] Jan 28 17:21:08 crc kubenswrapper[4903]: E0128 17:21:08.369025 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e37ca5-27ba-423f-86c5-854a2119285c" containerName="mariadb-database-create" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.369038 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e37ca5-27ba-423f-86c5-854a2119285c" containerName="mariadb-database-create" Jan 28 17:21:08 crc kubenswrapper[4903]: E0128 17:21:08.369066 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e053efc4-84f0-4d97-a334-180738eb2791" containerName="mariadb-account-create-update" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.369072 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e053efc4-84f0-4d97-a334-180738eb2791" containerName="mariadb-account-create-update" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.369224 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e053efc4-84f0-4d97-a334-180738eb2791" containerName="mariadb-account-create-update" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.369248 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e37ca5-27ba-423f-86c5-854a2119285c" containerName="mariadb-database-create" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.369807 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.372722 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hhbk4" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.372809 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.373025 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.387000 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n7jdx"] Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.502925 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-etc-machine-id\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.502970 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-db-sync-config-data\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.503022 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-scripts\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 
17:21:08.503053 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-combined-ca-bundle\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.503160 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9f7p\" (UniqueName: \"kubernetes.io/projected/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-kube-api-access-m9f7p\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.503194 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-config-data\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.604802 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9f7p\" (UniqueName: \"kubernetes.io/projected/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-kube-api-access-m9f7p\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.604867 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-config-data\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.604924 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-etc-machine-id\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.604956 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-db-sync-config-data\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.605028 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-scripts\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.605064 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-combined-ca-bundle\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.605642 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-etc-machine-id\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.611471 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-config-data\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.617603 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-db-sync-config-data\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.617604 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-scripts\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.618101 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-combined-ca-bundle\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.623109 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9f7p\" (UniqueName: \"kubernetes.io/projected/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-kube-api-access-m9f7p\") pod \"cinder-db-sync-n7jdx\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:08 crc kubenswrapper[4903]: I0128 17:21:08.693988 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:09 crc kubenswrapper[4903]: I0128 17:21:09.127509 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n7jdx"] Jan 28 17:21:09 crc kubenswrapper[4903]: I0128 17:21:09.671955 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n7jdx" event={"ID":"ad604886-c21a-4d1f-bf2b-d1a9765ae9db","Type":"ContainerStarted","Data":"52952e81a03d0777b4cd697a2a94a0e47ac38da3f67d9b251071b49a49b7cf0c"} Jan 28 17:21:10 crc kubenswrapper[4903]: I0128 17:21:10.683116 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n7jdx" event={"ID":"ad604886-c21a-4d1f-bf2b-d1a9765ae9db","Type":"ContainerStarted","Data":"ec2ecd0d6532610a8091c5475dbe4cf4c0a21185e6ab7ad39ef6960bc446a65b"} Jan 28 17:21:10 crc kubenswrapper[4903]: I0128 17:21:10.700862 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n7jdx" podStartSLOduration=2.700842304 podStartE2EDuration="2.700842304s" podCreationTimestamp="2026-01-28 17:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:10.699922968 +0000 UTC m=+5742.975894489" watchObservedRunningTime="2026-01-28 17:21:10.700842304 +0000 UTC m=+5742.976813815" Jan 28 17:21:12 crc kubenswrapper[4903]: I0128 17:21:12.707004 4903 generic.go:334] "Generic (PLEG): container finished" podID="ad604886-c21a-4d1f-bf2b-d1a9765ae9db" containerID="ec2ecd0d6532610a8091c5475dbe4cf4c0a21185e6ab7ad39ef6960bc446a65b" exitCode=0 Jan 28 17:21:12 crc kubenswrapper[4903]: I0128 17:21:12.707095 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n7jdx" event={"ID":"ad604886-c21a-4d1f-bf2b-d1a9765ae9db","Type":"ContainerDied","Data":"ec2ecd0d6532610a8091c5475dbe4cf4c0a21185e6ab7ad39ef6960bc446a65b"} Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.048218 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.206967 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-etc-machine-id\") pod \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.207045 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9f7p\" (UniqueName: \"kubernetes.io/projected/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-kube-api-access-m9f7p\") pod \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.207093 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-scripts\") pod \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.207163 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-db-sync-config-data\") pod \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.207279 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-combined-ca-bundle\") pod \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.207326 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-config-data\") pod \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\" (UID: \"ad604886-c21a-4d1f-bf2b-d1a9765ae9db\") " Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.207622 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ad604886-c21a-4d1f-bf2b-d1a9765ae9db" (UID: "ad604886-c21a-4d1f-bf2b-d1a9765ae9db"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.208194 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.214345 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-kube-api-access-m9f7p" (OuterVolumeSpecName: "kube-api-access-m9f7p") pod "ad604886-c21a-4d1f-bf2b-d1a9765ae9db" (UID: "ad604886-c21a-4d1f-bf2b-d1a9765ae9db"). InnerVolumeSpecName "kube-api-access-m9f7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.215111 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-scripts" (OuterVolumeSpecName: "scripts") pod "ad604886-c21a-4d1f-bf2b-d1a9765ae9db" (UID: "ad604886-c21a-4d1f-bf2b-d1a9765ae9db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.218741 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad604886-c21a-4d1f-bf2b-d1a9765ae9db" (UID: "ad604886-c21a-4d1f-bf2b-d1a9765ae9db"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.244558 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad604886-c21a-4d1f-bf2b-d1a9765ae9db" (UID: "ad604886-c21a-4d1f-bf2b-d1a9765ae9db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.260901 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-config-data" (OuterVolumeSpecName: "config-data") pod "ad604886-c21a-4d1f-bf2b-d1a9765ae9db" (UID: "ad604886-c21a-4d1f-bf2b-d1a9765ae9db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.309357 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.309758 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.309773 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9f7p\" (UniqueName: \"kubernetes.io/projected/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-kube-api-access-m9f7p\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.309783 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.309792 4903 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad604886-c21a-4d1f-bf2b-d1a9765ae9db-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.726276 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n7jdx" event={"ID":"ad604886-c21a-4d1f-bf2b-d1a9765ae9db","Type":"ContainerDied","Data":"52952e81a03d0777b4cd697a2a94a0e47ac38da3f67d9b251071b49a49b7cf0c"} Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.726331 4903 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="52952e81a03d0777b4cd697a2a94a0e47ac38da3f67d9b251071b49a49b7cf0c" Jan 28 17:21:14 crc kubenswrapper[4903]: I0128 17:21:14.726361 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n7jdx" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.061243 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4d4d8655-ngz2q"] Jan 28 17:21:15 crc kubenswrapper[4903]: E0128 17:21:15.061623 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad604886-c21a-4d1f-bf2b-d1a9765ae9db" containerName="cinder-db-sync" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.061636 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad604886-c21a-4d1f-bf2b-d1a9765ae9db" containerName="cinder-db-sync" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.061805 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad604886-c21a-4d1f-bf2b-d1a9765ae9db" containerName="cinder-db-sync" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.062656 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.082743 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4d4d8655-ngz2q"] Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.129008 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsq82\" (UniqueName: \"kubernetes.io/projected/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-kube-api-access-vsq82\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.129058 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-dns-svc\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.129098 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-sb\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.129130 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-nb\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.129162 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-config\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.228486 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 
28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.230179 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.230956 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-dns-svc\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.231043 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-sb\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.231093 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-nb\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.231147 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-config\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.231266 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsq82\" (UniqueName: \"kubernetes.io/projected/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-kube-api-access-vsq82\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.232571 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-sb\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.232572 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-nb\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.232652 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-dns-svc\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.232735 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-config\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " 
pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.236692 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.236945 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.237139 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hhbk4" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.238186 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.245881 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.261051 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsq82\" (UniqueName: \"kubernetes.io/projected/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-kube-api-access-vsq82\") pod \"dnsmasq-dns-c4d4d8655-ngz2q\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.332791 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-scripts\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.332905 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f54090-0e28-4884-9a60-a3f95d9b526a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.332960 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f54090-0e28-4884-9a60-a3f95d9b526a-logs\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.333000 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbk76\" (UniqueName: \"kubernetes.io/projected/86f54090-0e28-4884-9a60-a3f95d9b526a-kube-api-access-rbk76\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.333021 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.333077 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.333107 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data-custom\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.378896 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.413189 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:21:15 crc kubenswrapper[4903]: E0128 17:21:15.413448 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.435416 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f54090-0e28-4884-9a60-a3f95d9b526a-logs\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.435517 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.435573 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbk76\" (UniqueName: \"kubernetes.io/projected/86f54090-0e28-4884-9a60-a3f95d9b526a-kube-api-access-rbk76\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.435664 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.435725 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data-custom\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.435796 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-scripts\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.435926 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f54090-0e28-4884-9a60-a3f95d9b526a-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.437903 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f54090-0e28-4884-9a60-a3f95d9b526a-logs\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.442690 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data-custom\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.443751 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f54090-0e28-4884-9a60-a3f95d9b526a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.444348 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.446964 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-scripts\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.447455 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.467206 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbk76\" (UniqueName: \"kubernetes.io/projected/86f54090-0e28-4884-9a60-a3f95d9b526a-kube-api-access-rbk76\") pod \"cinder-api-0\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.551397 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:15 crc kubenswrapper[4903]: I0128 17:21:15.935431 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4d4d8655-ngz2q"] Jan 28 17:21:16 crc kubenswrapper[4903]: I0128 17:21:16.068555 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:16 crc kubenswrapper[4903]: I0128 17:21:16.782110 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86f54090-0e28-4884-9a60-a3f95d9b526a","Type":"ContainerStarted","Data":"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c"} Jan 28 17:21:16 crc kubenswrapper[4903]: I0128 17:21:16.783037 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86f54090-0e28-4884-9a60-a3f95d9b526a","Type":"ContainerStarted","Data":"a87282a4dec030e926c0d5a6f84549488e29b77c57131af69b96c1ebc73b4b1e"} Jan 28 17:21:16 crc kubenswrapper[4903]: I0128 17:21:16.783863 4903 generic.go:334] "Generic (PLEG): container finished" podID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerID="a44c804c296864054cd8db400ba626cb9766b3e10c61f4cbb5b3917e194928d9" exitCode=0 Jan 28 17:21:16 crc kubenswrapper[4903]: I0128 17:21:16.783927 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" event={"ID":"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc","Type":"ContainerDied","Data":"a44c804c296864054cd8db400ba626cb9766b3e10c61f4cbb5b3917e194928d9"} Jan 28 17:21:16 crc kubenswrapper[4903]: I0128 17:21:16.783969 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" event={"ID":"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc","Type":"ContainerStarted","Data":"9ddbca2e26bf7bcbe35bf0ca93dd244588314219c6a8eb51306dfd25cd5aa633"} Jan 28 17:21:17 crc kubenswrapper[4903]: I0128 17:21:17.647151 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:17 crc kubenswrapper[4903]: I0128 17:21:17.794644 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86f54090-0e28-4884-9a60-a3f95d9b526a","Type":"ContainerStarted","Data":"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea"} Jan 28 17:21:17 crc kubenswrapper[4903]: I0128 17:21:17.794792 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 17:21:17 crc kubenswrapper[4903]: I0128 17:21:17.797460 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" event={"ID":"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc","Type":"ContainerStarted","Data":"a017cb6b0c626f509ada0f5d236ebd87c2cc43bf6c3959e482b1da506fc80d64"} Jan 28 17:21:17 crc kubenswrapper[4903]: I0128 17:21:17.797672 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:17 crc kubenswrapper[4903]: I0128 17:21:17.824811 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.824787134 podStartE2EDuration="2.824787134s" podCreationTimestamp="2026-01-28 17:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:17.81576786 +0000 UTC m=+5750.091739391" watchObservedRunningTime="2026-01-28 17:21:17.824787134 +0000 UTC m=+5750.100758655" Jan 28 17:21:17 crc kubenswrapper[4903]: I0128 
17:21:17.860204 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" podStartSLOduration=2.860188454 podStartE2EDuration="2.860188454s" podCreationTimestamp="2026-01-28 17:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:17.85668637 +0000 UTC m=+5750.132657891" watchObservedRunningTime="2026-01-28 17:21:17.860188454 +0000 UTC m=+5750.136159965" Jan 28 17:21:18 crc kubenswrapper[4903]: I0128 17:21:18.804558 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api-log" containerID="cri-o://6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c" gracePeriod=30 Jan 28 17:21:18 crc kubenswrapper[4903]: I0128 17:21:18.805057 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api" containerID="cri-o://79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea" gracePeriod=30 Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.489270 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.542137 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f54090-0e28-4884-9a60-a3f95d9b526a-etc-machine-id\") pod \"86f54090-0e28-4884-9a60-a3f95d9b526a\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.542239 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f54090-0e28-4884-9a60-a3f95d9b526a-logs\") pod \"86f54090-0e28-4884-9a60-a3f95d9b526a\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.542410 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-combined-ca-bundle\") pod \"86f54090-0e28-4884-9a60-a3f95d9b526a\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.542448 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data\") pod \"86f54090-0e28-4884-9a60-a3f95d9b526a\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.542470 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data-custom\") pod \"86f54090-0e28-4884-9a60-a3f95d9b526a\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.542511 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-scripts\") pod \"86f54090-0e28-4884-9a60-a3f95d9b526a\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.542559 4903 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbk76\" (UniqueName: \"kubernetes.io/projected/86f54090-0e28-4884-9a60-a3f95d9b526a-kube-api-access-rbk76\") pod \"86f54090-0e28-4884-9a60-a3f95d9b526a\" (UID: \"86f54090-0e28-4884-9a60-a3f95d9b526a\") " Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.543420 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f54090-0e28-4884-9a60-a3f95d9b526a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86f54090-0e28-4884-9a60-a3f95d9b526a" (UID: "86f54090-0e28-4884-9a60-a3f95d9b526a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.543771 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86f54090-0e28-4884-9a60-a3f95d9b526a-logs" (OuterVolumeSpecName: "logs") pod "86f54090-0e28-4884-9a60-a3f95d9b526a" (UID: "86f54090-0e28-4884-9a60-a3f95d9b526a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.553764 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-scripts" (OuterVolumeSpecName: "scripts") pod "86f54090-0e28-4884-9a60-a3f95d9b526a" (UID: "86f54090-0e28-4884-9a60-a3f95d9b526a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.553811 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86f54090-0e28-4884-9a60-a3f95d9b526a" (UID: "86f54090-0e28-4884-9a60-a3f95d9b526a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.553981 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f54090-0e28-4884-9a60-a3f95d9b526a-kube-api-access-rbk76" (OuterVolumeSpecName: "kube-api-access-rbk76") pod "86f54090-0e28-4884-9a60-a3f95d9b526a" (UID: "86f54090-0e28-4884-9a60-a3f95d9b526a"). InnerVolumeSpecName "kube-api-access-rbk76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.571364 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86f54090-0e28-4884-9a60-a3f95d9b526a" (UID: "86f54090-0e28-4884-9a60-a3f95d9b526a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.608044 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data" (OuterVolumeSpecName: "config-data") pod "86f54090-0e28-4884-9a60-a3f95d9b526a" (UID: "86f54090-0e28-4884-9a60-a3f95d9b526a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.643639 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.643672 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.643684 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.643694 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f54090-0e28-4884-9a60-a3f95d9b526a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.643706 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbk76\" (UniqueName: \"kubernetes.io/projected/86f54090-0e28-4884-9a60-a3f95d9b526a-kube-api-access-rbk76\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.643720 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86f54090-0e28-4884-9a60-a3f95d9b526a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.643733 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f54090-0e28-4884-9a60-a3f95d9b526a-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.820169 4903 generic.go:334] "Generic (PLEG): container finished" podID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerID="79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea" exitCode=0 Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.820207 4903 generic.go:334] "Generic (PLEG): container finished" podID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerID="6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c" exitCode=143 Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.820211 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86f54090-0e28-4884-9a60-a3f95d9b526a","Type":"ContainerDied","Data":"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea"} Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.820242 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.820260 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86f54090-0e28-4884-9a60-a3f95d9b526a","Type":"ContainerDied","Data":"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c"} Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.820276 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86f54090-0e28-4884-9a60-a3f95d9b526a","Type":"ContainerDied","Data":"a87282a4dec030e926c0d5a6f84549488e29b77c57131af69b96c1ebc73b4b1e"} Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.820295 4903 scope.go:117] "RemoveContainer" containerID="79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.842975 4903 scope.go:117] "RemoveContainer" containerID="6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.857173 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.864836 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.870065 4903 scope.go:117] "RemoveContainer" containerID="79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea" Jan 28 17:21:19 crc kubenswrapper[4903]: E0128 17:21:19.870652 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea\": container with ID starting with 79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea not found: ID does not exist" containerID="79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.870743 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea"} err="failed to get container status \"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea\": rpc error: code = NotFound desc = could not find container \"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea\": container with ID starting with 79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea not found: ID does not exist" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.870770 4903 scope.go:117] "RemoveContainer" containerID="6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c" Jan 28 17:21:19 crc kubenswrapper[4903]: E0128 17:21:19.872248 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c\": container with ID starting with 6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c not found: ID does not exist" containerID="6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.872355 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c"} err="failed to get container status \"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c\": rpc error: code = NotFound desc = 
could not find container \"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c\": container with ID starting with 6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c not found: ID does not exist" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.872387 4903 scope.go:117] "RemoveContainer" containerID="79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.872721 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea"} err="failed to get container status \"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea\": rpc error: code = NotFound desc = could not find container \"79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea\": container with ID starting with 79d9785ac4223e9147bfd7830ed249c5557f4c962474710ee5169a770139faea not found: ID does not exist" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.872740 4903 scope.go:117] "RemoveContainer" containerID="6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.873048 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c"} err="failed to get container status \"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c\": rpc error: code = NotFound desc = could not find container \"6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c\": container with ID starting with 6d6b20a88048a89bf709384489bd89d50077e49cbcc4332376d075509fa9363c not found: ID does not exist" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.884875 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:19 crc kubenswrapper[4903]: E0128 17:21:19.885347 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.885371 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api" Jan 28 17:21:19 crc kubenswrapper[4903]: E0128 17:21:19.885383 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api-log" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.885392 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api-log" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.886488 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.886518 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" containerName="cinder-api-log" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.891001 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.893257 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hhbk4" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.893564 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.893596 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.893664 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.893709 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.893838 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.898873 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950101 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-628t4\" (UniqueName: \"kubernetes.io/projected/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-kube-api-access-628t4\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950151 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950186 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950604 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950658 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950685 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data-custom\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950790 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-scripts\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950879 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:19 crc kubenswrapper[4903]: I0128 17:21:19.950985 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-logs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.052725 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-scripts\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.052798 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.052840 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-logs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.052888 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-628t4\" (UniqueName: \"kubernetes.io/projected/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-kube-api-access-628t4\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.052912 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.052951 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.053041 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 
17:21:20.053062 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.053080 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data-custom\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.054509 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-logs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.054601 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.057244 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.057366 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-scripts\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.058054 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.058206 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.058818 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data-custom\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.062116 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.074663 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-628t4\" (UniqueName: \"kubernetes.io/projected/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-kube-api-access-628t4\") pod \"cinder-api-0\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.217398 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.444338 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f54090-0e28-4884-9a60-a3f95d9b526a" path="/var/lib/kubelet/pods/86f54090-0e28-4884-9a60-a3f95d9b526a/volumes" Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.668672 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:20 crc kubenswrapper[4903]: I0128 17:21:20.832926 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a","Type":"ContainerStarted","Data":"3f1f974beea413b00f709e0e05ff446fc4f09c3f2be9aa006aa3ce5e6616a8e7"} Jan 28 17:21:21 crc kubenswrapper[4903]: I0128 17:21:21.842338 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a","Type":"ContainerStarted","Data":"9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e"} Jan 28 17:21:21 crc kubenswrapper[4903]: I0128 17:21:21.842716 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a","Type":"ContainerStarted","Data":"d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9"} Jan 28 17:21:21 crc kubenswrapper[4903]: I0128 17:21:21.843745 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 17:21:21 crc kubenswrapper[4903]: I0128 17:21:21.870766 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.8707278929999998 podStartE2EDuration="2.870727893s" podCreationTimestamp="2026-01-28 17:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:21.860732842 +0000 UTC m=+5754.136704363" watchObservedRunningTime="2026-01-28 17:21:21.870727893 +0000 UTC m=+5754.146699414" Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.380759 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.457930 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd66ff975-qnwdt"] Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.458156 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" podUID="f719d804-3532-4619-b702-61e91ff99905" containerName="dnsmasq-dns" containerID="cri-o://8c51c6d0adc0b7bd2e1b3a4932a79291c6b97b683805065523aedfe04c911b7b" gracePeriod=10 Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.882510 4903 generic.go:334] "Generic (PLEG): container finished" podID="f719d804-3532-4619-b702-61e91ff99905" containerID="8c51c6d0adc0b7bd2e1b3a4932a79291c6b97b683805065523aedfe04c911b7b" exitCode=0 Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.882560 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" event={"ID":"f719d804-3532-4619-b702-61e91ff99905","Type":"ContainerDied","Data":"8c51c6d0adc0b7bd2e1b3a4932a79291c6b97b683805065523aedfe04c911b7b"} Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.883138 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" event={"ID":"f719d804-3532-4619-b702-61e91ff99905","Type":"ContainerDied","Data":"f5d0a0deac77fba402be181f5afa5d8d597d961a187f35d7112e695cdb688138"} Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.883160 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d0a0deac77fba402be181f5afa5d8d597d961a187f35d7112e695cdb688138" Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.941616 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.977939 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-nb\") pod \"f719d804-3532-4619-b702-61e91ff99905\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.978023 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-dns-svc\") pod \"f719d804-3532-4619-b702-61e91ff99905\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.978116 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-config\") pod \"f719d804-3532-4619-b702-61e91ff99905\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.978158 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhq6v\" (UniqueName: \"kubernetes.io/projected/f719d804-3532-4619-b702-61e91ff99905-kube-api-access-hhq6v\") pod \"f719d804-3532-4619-b702-61e91ff99905\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.978183 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-sb\") pod \"f719d804-3532-4619-b702-61e91ff99905\" (UID: \"f719d804-3532-4619-b702-61e91ff99905\") " Jan 28 17:21:25 crc kubenswrapper[4903]: I0128 17:21:25.985643 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f719d804-3532-4619-b702-61e91ff99905-kube-api-access-hhq6v" (OuterVolumeSpecName: "kube-api-access-hhq6v") pod "f719d804-3532-4619-b702-61e91ff99905" (UID: "f719d804-3532-4619-b702-61e91ff99905"). InnerVolumeSpecName "kube-api-access-hhq6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.035931 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-config" (OuterVolumeSpecName: "config") pod "f719d804-3532-4619-b702-61e91ff99905" (UID: "f719d804-3532-4619-b702-61e91ff99905"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.045094 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f719d804-3532-4619-b702-61e91ff99905" (UID: "f719d804-3532-4619-b702-61e91ff99905"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.045147 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f719d804-3532-4619-b702-61e91ff99905" (UID: "f719d804-3532-4619-b702-61e91ff99905"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.046389 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f719d804-3532-4619-b702-61e91ff99905" (UID: "f719d804-3532-4619-b702-61e91ff99905"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.094777 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.094815 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.094825 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.094857 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhq6v\" (UniqueName: \"kubernetes.io/projected/f719d804-3532-4619-b702-61e91ff99905-kube-api-access-hhq6v\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.094868 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f719d804-3532-4619-b702-61e91ff99905-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.413900 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:21:26 crc kubenswrapper[4903]: E0128 17:21:26.414458 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.889194 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd66ff975-qnwdt" Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.909374 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd66ff975-qnwdt"] Jan 28 17:21:26 crc kubenswrapper[4903]: I0128 17:21:26.916382 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd66ff975-qnwdt"] Jan 28 17:21:28 crc kubenswrapper[4903]: I0128 17:21:28.429139 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f719d804-3532-4619-b702-61e91ff99905" path="/var/lib/kubelet/pods/f719d804-3532-4619-b702-61e91ff99905/volumes" Jan 28 17:21:32 crc kubenswrapper[4903]: I0128 17:21:32.251996 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 17:21:39 crc kubenswrapper[4903]: I0128 17:21:39.413817 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:21:39 crc kubenswrapper[4903]: E0128 17:21:39.414555 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.028215 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:21:49 crc kubenswrapper[4903]: E0128 17:21:49.029046 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f719d804-3532-4619-b702-61e91ff99905" containerName="init" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.029061 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f719d804-3532-4619-b702-61e91ff99905" containerName="init" Jan 28 17:21:49 crc kubenswrapper[4903]: E0128 17:21:49.029098 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f719d804-3532-4619-b702-61e91ff99905" containerName="dnsmasq-dns" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.029105 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f719d804-3532-4619-b702-61e91ff99905" containerName="dnsmasq-dns" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.029252 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f719d804-3532-4619-b702-61e91ff99905" containerName="dnsmasq-dns" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.030148 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.032904 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.049257 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.140964 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.141065 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.141167 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.141444 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drq9x\" (UniqueName: \"kubernetes.io/projected/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-kube-api-access-drq9x\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.141594 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.141691 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.242925 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.243030 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.243061 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.243136 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drq9x\" (UniqueName: \"kubernetes.io/projected/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-kube-api-access-drq9x\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.243169 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.243201 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.243648 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.249428 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.249433 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.251036 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-scripts\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.255787 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.268639 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drq9x\" (UniqueName: \"kubernetes.io/projected/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-kube-api-access-drq9x\") pod \"cinder-scheduler-0\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 
crc kubenswrapper[4903]: I0128 17:21:49.350756 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 17:21:49 crc kubenswrapper[4903]: I0128 17:21:49.826052 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:21:50 crc kubenswrapper[4903]: I0128 17:21:50.078674 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5","Type":"ContainerStarted","Data":"2c3e3957621809c4d7fb7714bd11257d20ff741ded4999aad1c095344407d169"} Jan 28 17:21:50 crc kubenswrapper[4903]: I0128 17:21:50.253063 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:50 crc kubenswrapper[4903]: I0128 17:21:50.253370 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api-log" containerID="cri-o://d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9" gracePeriod=30 Jan 28 17:21:50 crc kubenswrapper[4903]: I0128 17:21:50.253525 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api" containerID="cri-o://9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e" gracePeriod=30 Jan 28 17:21:50 crc kubenswrapper[4903]: E0128 17:21:50.479855 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71bcdfcd_fcaf_4d5d_a1a0_a8b08f8bdf0a.slice/crio-conmon-d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9.scope\": RecentStats: unable to find data in memory cache]" Jan 28 17:21:51 crc kubenswrapper[4903]: I0128 17:21:51.093053 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5","Type":"ContainerStarted","Data":"5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118"} Jan 28 17:21:51 crc kubenswrapper[4903]: I0128 17:21:51.093418 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5","Type":"ContainerStarted","Data":"388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72"} Jan 28 17:21:51 crc kubenswrapper[4903]: I0128 17:21:51.104940 4903 generic.go:334] "Generic (PLEG): container finished" podID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerID="d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9" exitCode=143 Jan 28 17:21:51 crc kubenswrapper[4903]: I0128 17:21:51.105000 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a","Type":"ContainerDied","Data":"d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9"} Jan 28 17:21:51 crc kubenswrapper[4903]: I0128 17:21:51.115928 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.115905252 podStartE2EDuration="2.115905252s" podCreationTimestamp="2026-01-28 17:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:51.112273413 +0000 UTC m=+5783.388244944" watchObservedRunningTime="2026-01-28 
17:21:51.115905252 +0000 UTC m=+5783.391876763" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.835823 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.930756 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-etc-machine-id\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.930841 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.930927 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.930985 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-628t4\" (UniqueName: \"kubernetes.io/projected/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-kube-api-access-628t4\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931073 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-logs\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931140 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-combined-ca-bundle\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931211 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-public-tls-certs\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931244 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-scripts\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931326 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-internal-tls-certs\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931446 4903 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data-custom\") pod \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\" (UID: \"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a\") " Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931977 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.931983 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-logs" (OuterVolumeSpecName: "logs") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.941791 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-kube-api-access-628t4" (OuterVolumeSpecName: "kube-api-access-628t4") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "kube-api-access-628t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.943637 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-scripts" (OuterVolumeSpecName: "scripts") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.951721 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.965634 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.991794 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:53 crc kubenswrapper[4903]: I0128 17:21:53.995654 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data" (OuterVolumeSpecName: "config-data") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.004094 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" (UID: "71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.033975 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.034023 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-628t4\" (UniqueName: \"kubernetes.io/projected/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-kube-api-access-628t4\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.034040 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.034051 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.034066 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.034078 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.034087 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.034097 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.138800 4903 generic.go:334] "Generic (PLEG): container finished" podID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerID="9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e" exitCode=0 Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.138832 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a","Type":"ContainerDied","Data":"9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e"} Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.138885 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a","Type":"ContainerDied","Data":"3f1f974beea413b00f709e0e05ff446fc4f09c3f2be9aa006aa3ce5e6616a8e7"} Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 
17:21:54.138896 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.138908 4903 scope.go:117] "RemoveContainer" containerID="9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.160540 4903 scope.go:117] "RemoveContainer" containerID="d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.181177 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.193764 4903 scope.go:117] "RemoveContainer" containerID="9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e" Jan 28 17:21:54 crc kubenswrapper[4903]: E0128 17:21:54.195701 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e\": container with ID starting with 9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e not found: ID does not exist" containerID="9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.195748 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e"} err="failed to get container status \"9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e\": rpc error: code = NotFound desc = could not find container \"9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e\": container with ID starting with 9eadadd524180db0f015f3040701e0b534bfe1aaa11c6e49ee0a0f8f295bfb9e not found: ID does not exist" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.195777 4903 scope.go:117] "RemoveContainer" containerID="d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.195858 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:54 crc kubenswrapper[4903]: E0128 17:21:54.196414 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9\": container with ID starting with d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9 not found: ID does not exist" containerID="d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.196446 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9"} err="failed to get container status \"d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9\": rpc error: code = NotFound desc = could not find container \"d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9\": container with ID starting with d71b07b90c7a5ce3268356ea10b4458ef2b026c9e59e5cfa14b7eadb4c82cfe9 not found: ID does not exist" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.207319 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:54 crc kubenswrapper[4903]: E0128 17:21:54.207817 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api-log" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.207841 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api-log" Jan 28 17:21:54 crc kubenswrapper[4903]: E0128 17:21:54.207868 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.207917 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.208114 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api-log" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.208184 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" containerName="cinder-api" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.209276 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.216697 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.223014 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.223428 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.223426 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.252722 4903 scope.go:117] "RemoveContainer" containerID="090676cfe480499ffeecdf09b5a8d71dc7cb59cd1f1766425eb51f5826208e8c" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.271444 4903 scope.go:117] "RemoveContainer" containerID="ac7adb019d5a19fee0a814ce5780886268d62bceacd8f0c2daaa9a6f1d868dea" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339349 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339392 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-scripts\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339418 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-config-data\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339506 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e55f6947-3db9-4547-b5f9-41546693bf3d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339576 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339596 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339622 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn4q\" (UniqueName: \"kubernetes.io/projected/e55f6947-3db9-4547-b5f9-41546693bf3d-kube-api-access-jmn4q\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339687 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f6947-3db9-4547-b5f9-41546693bf3d-logs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.339740 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.351743 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.413824 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:21:54 crc kubenswrapper[4903]: E0128 17:21:54.414022 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.423381 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a" path="/var/lib/kubelet/pods/71bcdfcd-fcaf-4d5d-a1a0-a8b08f8bdf0a/volumes" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441323 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc 
kubenswrapper[4903]: I0128 17:21:54.441378 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441426 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmn4q\" (UniqueName: \"kubernetes.io/projected/e55f6947-3db9-4547-b5f9-41546693bf3d-kube-api-access-jmn4q\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441464 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f6947-3db9-4547-b5f9-41546693bf3d-logs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441519 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441629 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441645 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-scripts\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441659 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-config-data\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441688 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e55f6947-3db9-4547-b5f9-41546693bf3d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.441766 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e55f6947-3db9-4547-b5f9-41546693bf3d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.442880 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f6947-3db9-4547-b5f9-41546693bf3d-logs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 
17:21:54.444861 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.445483 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.446083 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-scripts\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.446232 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.446649 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.448669 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f6947-3db9-4547-b5f9-41546693bf3d-config-data\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.459633 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmn4q\" (UniqueName: \"kubernetes.io/projected/e55f6947-3db9-4547-b5f9-41546693bf3d-kube-api-access-jmn4q\") pod \"cinder-api-0\" (UID: \"e55f6947-3db9-4547-b5f9-41546693bf3d\") " pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.539430 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 17:21:54 crc kubenswrapper[4903]: W0128 17:21:54.987835 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode55f6947_3db9_4547_b5f9_41546693bf3d.slice/crio-a81438c58ef3b14da72a9b4945a4b6c2fa8d4ac06031d6d629142e1522f9c6f9 WatchSource:0}: Error finding container a81438c58ef3b14da72a9b4945a4b6c2fa8d4ac06031d6d629142e1522f9c6f9: Status 404 returned error can't find the container with id a81438c58ef3b14da72a9b4945a4b6c2fa8d4ac06031d6d629142e1522f9c6f9 Jan 28 17:21:54 crc kubenswrapper[4903]: I0128 17:21:54.993584 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 17:21:55 crc kubenswrapper[4903]: I0128 17:21:55.147412 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e55f6947-3db9-4547-b5f9-41546693bf3d","Type":"ContainerStarted","Data":"a81438c58ef3b14da72a9b4945a4b6c2fa8d4ac06031d6d629142e1522f9c6f9"} Jan 28 17:21:56 crc kubenswrapper[4903]: I0128 17:21:56.162674 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e55f6947-3db9-4547-b5f9-41546693bf3d","Type":"ContainerStarted","Data":"8ce816db39c7a998bb8b7b9e478cb2e501d36f4392528157dc81acb3b49ee0ee"} Jan 28 17:21:56 crc kubenswrapper[4903]: I0128 17:21:56.163099 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 17:21:56 crc kubenswrapper[4903]: I0128 17:21:56.163113 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e55f6947-3db9-4547-b5f9-41546693bf3d","Type":"ContainerStarted","Data":"b34b8dc4c48718cb12f873c7286db4dee0f1530d37dd6066ed6c158fc30ec06d"} Jan 28 17:21:56 crc kubenswrapper[4903]: I0128 17:21:56.196489 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.196466466 podStartE2EDuration="2.196466466s" podCreationTimestamp="2026-01-28 17:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:21:56.184910981 +0000 UTC m=+5788.460882512" watchObservedRunningTime="2026-01-28 17:21:56.196466466 +0000 UTC m=+5788.472437977" Jan 28 17:21:59 crc kubenswrapper[4903]: I0128 17:21:59.564963 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 17:21:59 crc kubenswrapper[4903]: I0128 17:21:59.625102 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:22:00 crc kubenswrapper[4903]: I0128 17:22:00.199962 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="cinder-scheduler" containerID="cri-o://388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72" gracePeriod=30 Jan 28 17:22:00 crc kubenswrapper[4903]: I0128 17:22:00.200029 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="probe" containerID="cri-o://5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118" gracePeriod=30 Jan 28 17:22:01 crc kubenswrapper[4903]: I0128 17:22:01.213382 4903 generic.go:334] "Generic (PLEG): container finished" podID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" 
containerID="5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118" exitCode=0 Jan 28 17:22:01 crc kubenswrapper[4903]: I0128 17:22:01.213452 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5","Type":"ContainerDied","Data":"5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118"} Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.133128 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.225641 4903 generic.go:334] "Generic (PLEG): container finished" podID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerID="388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72" exitCode=0 Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.225718 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5","Type":"ContainerDied","Data":"388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72"} Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.225767 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5","Type":"ContainerDied","Data":"2c3e3957621809c4d7fb7714bd11257d20ff741ded4999aad1c095344407d169"} Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.225795 4903 scope.go:117] "RemoveContainer" containerID="5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.225723 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.252795 4903 scope.go:117] "RemoveContainer" containerID="388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.275839 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-etc-machine-id\") pod \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.275959 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-scripts\") pod \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.275982 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" (UID: "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.276031 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drq9x\" (UniqueName: \"kubernetes.io/projected/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-kube-api-access-drq9x\") pod \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.276056 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data\") pod \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.276091 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data-custom\") pod \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.276119 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-combined-ca-bundle\") pod \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\" (UID: \"4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5\") " Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.277487 4903 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.283903 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-scripts" (OuterVolumeSpecName: "scripts") pod "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" (UID: "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.283913 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-kube-api-access-drq9x" (OuterVolumeSpecName: "kube-api-access-drq9x") pod "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" (UID: "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5"). InnerVolumeSpecName "kube-api-access-drq9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.286833 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" (UID: "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.287850 4903 scope.go:117] "RemoveContainer" containerID="5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118" Jan 28 17:22:02 crc kubenswrapper[4903]: E0128 17:22:02.291987 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118\": container with ID starting with 5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118 not found: ID does not exist" containerID="5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.292044 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118"} err="failed to get container status \"5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118\": rpc error: code = NotFound desc = could not find container \"5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118\": container with ID starting with 5676983b6f6a93e4302c6cc49e884243d4d279048483809e051e34257c953118 not found: ID does not exist" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.292073 4903 scope.go:117] "RemoveContainer" containerID="388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72" Jan 28 17:22:02 crc kubenswrapper[4903]: E0128 17:22:02.293404 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72\": container with ID starting with 388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72 not found: ID does not exist" containerID="388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.293443 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72"} err="failed to get container status \"388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72\": rpc error: code = NotFound desc = could not find container \"388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72\": container with ID starting with 388a7b1c59d88ad812e077426917f8df492834fa6345290356271a91aa673e72 not found: ID does not exist" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.330312 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" (UID: "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.378868 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.378904 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drq9x\" (UniqueName: \"kubernetes.io/projected/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-kube-api-access-drq9x\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.378914 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.378923 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.397634 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data" (OuterVolumeSpecName: "config-data") pod "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" (UID: "4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.480334 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.556315 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.568647 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.582389 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:22:02 crc kubenswrapper[4903]: E0128 17:22:02.583100 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="probe" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.583191 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="probe" Jan 28 17:22:02 crc kubenswrapper[4903]: E0128 17:22:02.583311 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="cinder-scheduler" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.583395 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="cinder-scheduler" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.583749 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="probe" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.583844 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" containerName="cinder-scheduler" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.584817 4903 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.593110 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.601731 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.686262 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-config-data\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.686396 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhxs\" (UniqueName: \"kubernetes.io/projected/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-kube-api-access-tlhxs\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.686430 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.686479 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.686552 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.686641 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-scripts\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.788237 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-scripts\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.788365 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-config-data\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.788414 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tlhxs\" (UniqueName: \"kubernetes.io/projected/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-kube-api-access-tlhxs\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.788436 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.788479 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.788544 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.789625 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.793464 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.793917 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.794303 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-config-data\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.796098 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-scripts\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.805763 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlhxs\" (UniqueName: \"kubernetes.io/projected/d339f5b6-a314-48f4-bd94-7a8b0e6c9f02-kube-api-access-tlhxs\") pod \"cinder-scheduler-0\" (UID: \"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02\") " 
pod="openstack/cinder-scheduler-0" Jan 28 17:22:02 crc kubenswrapper[4903]: I0128 17:22:02.905808 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 17:22:03 crc kubenswrapper[4903]: I0128 17:22:03.369168 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 17:22:04 crc kubenswrapper[4903]: I0128 17:22:04.279546 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02","Type":"ContainerStarted","Data":"5b9db2336c47c92dd0b0aeee656fba75e1b8cd17613af205a896e569f00e3c07"} Jan 28 17:22:04 crc kubenswrapper[4903]: I0128 17:22:04.280544 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02","Type":"ContainerStarted","Data":"836bca9a2c3335a7850a0e80cdd442d3f34d286609edf423bef719c237aca704"} Jan 28 17:22:04 crc kubenswrapper[4903]: I0128 17:22:04.426780 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5" path="/var/lib/kubelet/pods/4e2e0ec8-27fd-4c69-9cbb-02bb1f7ea1d5/volumes" Jan 28 17:22:05 crc kubenswrapper[4903]: I0128 17:22:05.289990 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d339f5b6-a314-48f4-bd94-7a8b0e6c9f02","Type":"ContainerStarted","Data":"1589c450fc60df81a7a607ea268f1c8b41836de0932e5c717dd7034928e49d89"} Jan 28 17:22:05 crc kubenswrapper[4903]: I0128 17:22:05.340878 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.340854262 podStartE2EDuration="3.340854262s" podCreationTimestamp="2026-01-28 17:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:05.336960227 +0000 UTC m=+5797.612931748" watchObservedRunningTime="2026-01-28 17:22:05.340854262 +0000 UTC m=+5797.616825773" Jan 28 17:22:06 crc kubenswrapper[4903]: I0128 17:22:06.417350 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:22:06 crc kubenswrapper[4903]: E0128 17:22:06.418000 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:22:06 crc kubenswrapper[4903]: I0128 17:22:06.702651 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 17:22:07 crc kubenswrapper[4903]: I0128 17:22:07.905918 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 17:22:13 crc kubenswrapper[4903]: I0128 17:22:13.119017 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.175786 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mbrnw"] Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.178414 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.187548 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mbrnw"] Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.284288 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f454-account-create-update-jvd5w"] Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.285657 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.295141 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f454-account-create-update-jvd5w"] Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.295829 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.345435 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b390eb3b-8f83-451c-8979-f640f892f3bd-operator-scripts\") pod \"glance-db-create-mbrnw\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.346284 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb6rx\" (UniqueName: \"kubernetes.io/projected/b390eb3b-8f83-451c-8979-f640f892f3bd-kube-api-access-nb6rx\") pod \"glance-db-create-mbrnw\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.447522 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ab7609-a704-4e48-bf27-52b61fca6c7d-operator-scripts\") pod \"glance-f454-account-create-update-jvd5w\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.447810 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb6rx\" (UniqueName: \"kubernetes.io/projected/b390eb3b-8f83-451c-8979-f640f892f3bd-kube-api-access-nb6rx\") pod \"glance-db-create-mbrnw\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.447892 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpknb\" (UniqueName: \"kubernetes.io/projected/79ab7609-a704-4e48-bf27-52b61fca6c7d-kube-api-access-mpknb\") pod \"glance-f454-account-create-update-jvd5w\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.447988 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b390eb3b-8f83-451c-8979-f640f892f3bd-operator-scripts\") pod \"glance-db-create-mbrnw\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.448835 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b390eb3b-8f83-451c-8979-f640f892f3bd-operator-scripts\") pod \"glance-db-create-mbrnw\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.467228 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb6rx\" (UniqueName: \"kubernetes.io/projected/b390eb3b-8f83-451c-8979-f640f892f3bd-kube-api-access-nb6rx\") pod \"glance-db-create-mbrnw\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.510427 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.549497 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ab7609-a704-4e48-bf27-52b61fca6c7d-operator-scripts\") pod \"glance-f454-account-create-update-jvd5w\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.549657 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpknb\" (UniqueName: \"kubernetes.io/projected/79ab7609-a704-4e48-bf27-52b61fca6c7d-kube-api-access-mpknb\") pod \"glance-f454-account-create-update-jvd5w\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.550267 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ab7609-a704-4e48-bf27-52b61fca6c7d-operator-scripts\") pod \"glance-f454-account-create-update-jvd5w\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.567687 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpknb\" (UniqueName: \"kubernetes.io/projected/79ab7609-a704-4e48-bf27-52b61fca6c7d-kube-api-access-mpknb\") pod \"glance-f454-account-create-update-jvd5w\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.602299 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:16 crc kubenswrapper[4903]: I0128 17:22:16.955872 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mbrnw"] Jan 28 17:22:16 crc kubenswrapper[4903]: W0128 17:22:16.959356 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb390eb3b_8f83_451c_8979_f640f892f3bd.slice/crio-0e494d70a4a4de3cc4c138ab119e3ec589041453beee783749b2da12d6a56f29 WatchSource:0}: Error finding container 0e494d70a4a4de3cc4c138ab119e3ec589041453beee783749b2da12d6a56f29: Status 404 returned error can't find the container with id 0e494d70a4a4de3cc4c138ab119e3ec589041453beee783749b2da12d6a56f29 Jan 28 17:22:17 crc kubenswrapper[4903]: I0128 17:22:17.051898 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f454-account-create-update-jvd5w"] Jan 28 17:22:17 crc kubenswrapper[4903]: W0128 17:22:17.056572 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79ab7609_a704_4e48_bf27_52b61fca6c7d.slice/crio-cdd3c9adb1dd4dff1ad223b5c5b6d3acdcd0575e2e072e275b7d76a2d8ab41ee WatchSource:0}: Error finding container cdd3c9adb1dd4dff1ad223b5c5b6d3acdcd0575e2e072e275b7d76a2d8ab41ee: Status 404 returned error can't find the container with id cdd3c9adb1dd4dff1ad223b5c5b6d3acdcd0575e2e072e275b7d76a2d8ab41ee Jan 28 17:22:17 crc kubenswrapper[4903]: I0128 17:22:17.393917 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mbrnw" event={"ID":"b390eb3b-8f83-451c-8979-f640f892f3bd","Type":"ContainerStarted","Data":"64a34c6a61409b1cb98fc19f36014022c36d221c9cbdec582937b1b90eb2bf5a"} Jan 28 17:22:17 crc kubenswrapper[4903]: I0128 17:22:17.393994 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mbrnw" event={"ID":"b390eb3b-8f83-451c-8979-f640f892f3bd","Type":"ContainerStarted","Data":"0e494d70a4a4de3cc4c138ab119e3ec589041453beee783749b2da12d6a56f29"} Jan 28 17:22:17 crc kubenswrapper[4903]: I0128 17:22:17.397928 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f454-account-create-update-jvd5w" event={"ID":"79ab7609-a704-4e48-bf27-52b61fca6c7d","Type":"ContainerStarted","Data":"a820c6f442d684793903714792e1d16b8db9334b1b03f4de0bbfc3ad32602fcf"} Jan 28 17:22:17 crc kubenswrapper[4903]: I0128 17:22:17.398010 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f454-account-create-update-jvd5w" event={"ID":"79ab7609-a704-4e48-bf27-52b61fca6c7d","Type":"ContainerStarted","Data":"cdd3c9adb1dd4dff1ad223b5c5b6d3acdcd0575e2e072e275b7d76a2d8ab41ee"} Jan 28 17:22:17 crc kubenswrapper[4903]: I0128 17:22:17.437412 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-mbrnw" podStartSLOduration=1.437382406 podStartE2EDuration="1.437382406s" podCreationTimestamp="2026-01-28 17:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:17.419056979 +0000 UTC m=+5809.695028490" watchObservedRunningTime="2026-01-28 17:22:17.437382406 +0000 UTC m=+5809.713353917" Jan 28 17:22:17 crc kubenswrapper[4903]: I0128 17:22:17.441705 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f454-account-create-update-jvd5w" podStartSLOduration=1.441664892 
podStartE2EDuration="1.441664892s" podCreationTimestamp="2026-01-28 17:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:17.43272187 +0000 UTC m=+5809.708693381" watchObservedRunningTime="2026-01-28 17:22:17.441664892 +0000 UTC m=+5809.717636403" Jan 28 17:22:18 crc kubenswrapper[4903]: I0128 17:22:18.412051 4903 generic.go:334] "Generic (PLEG): container finished" podID="b390eb3b-8f83-451c-8979-f640f892f3bd" containerID="64a34c6a61409b1cb98fc19f36014022c36d221c9cbdec582937b1b90eb2bf5a" exitCode=0 Jan 28 17:22:18 crc kubenswrapper[4903]: I0128 17:22:18.412304 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mbrnw" event={"ID":"b390eb3b-8f83-451c-8979-f640f892f3bd","Type":"ContainerDied","Data":"64a34c6a61409b1cb98fc19f36014022c36d221c9cbdec582937b1b90eb2bf5a"} Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.421275 4903 generic.go:334] "Generic (PLEG): container finished" podID="79ab7609-a704-4e48-bf27-52b61fca6c7d" containerID="a820c6f442d684793903714792e1d16b8db9334b1b03f4de0bbfc3ad32602fcf" exitCode=0 Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.421327 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f454-account-create-update-jvd5w" event={"ID":"79ab7609-a704-4e48-bf27-52b61fca6c7d","Type":"ContainerDied","Data":"a820c6f442d684793903714792e1d16b8db9334b1b03f4de0bbfc3ad32602fcf"} Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.744769 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.818771 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb6rx\" (UniqueName: \"kubernetes.io/projected/b390eb3b-8f83-451c-8979-f640f892f3bd-kube-api-access-nb6rx\") pod \"b390eb3b-8f83-451c-8979-f640f892f3bd\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.818942 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b390eb3b-8f83-451c-8979-f640f892f3bd-operator-scripts\") pod \"b390eb3b-8f83-451c-8979-f640f892f3bd\" (UID: \"b390eb3b-8f83-451c-8979-f640f892f3bd\") " Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.819830 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b390eb3b-8f83-451c-8979-f640f892f3bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b390eb3b-8f83-451c-8979-f640f892f3bd" (UID: "b390eb3b-8f83-451c-8979-f640f892f3bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.827882 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b390eb3b-8f83-451c-8979-f640f892f3bd-kube-api-access-nb6rx" (OuterVolumeSpecName: "kube-api-access-nb6rx") pod "b390eb3b-8f83-451c-8979-f640f892f3bd" (UID: "b390eb3b-8f83-451c-8979-f640f892f3bd"). InnerVolumeSpecName "kube-api-access-nb6rx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.921075 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb6rx\" (UniqueName: \"kubernetes.io/projected/b390eb3b-8f83-451c-8979-f640f892f3bd-kube-api-access-nb6rx\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:19 crc kubenswrapper[4903]: I0128 17:22:19.921419 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b390eb3b-8f83-451c-8979-f640f892f3bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.415412 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:22:20 crc kubenswrapper[4903]: E0128 17:22:20.415682 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.429761 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mbrnw" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.429763 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mbrnw" event={"ID":"b390eb3b-8f83-451c-8979-f640f892f3bd","Type":"ContainerDied","Data":"0e494d70a4a4de3cc4c138ab119e3ec589041453beee783749b2da12d6a56f29"} Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.431225 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e494d70a4a4de3cc4c138ab119e3ec589041453beee783749b2da12d6a56f29" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.790944 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.839013 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpknb\" (UniqueName: \"kubernetes.io/projected/79ab7609-a704-4e48-bf27-52b61fca6c7d-kube-api-access-mpknb\") pod \"79ab7609-a704-4e48-bf27-52b61fca6c7d\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.839182 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ab7609-a704-4e48-bf27-52b61fca6c7d-operator-scripts\") pod \"79ab7609-a704-4e48-bf27-52b61fca6c7d\" (UID: \"79ab7609-a704-4e48-bf27-52b61fca6c7d\") " Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.840031 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ab7609-a704-4e48-bf27-52b61fca6c7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79ab7609-a704-4e48-bf27-52b61fca6c7d" (UID: "79ab7609-a704-4e48-bf27-52b61fca6c7d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.844390 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ab7609-a704-4e48-bf27-52b61fca6c7d-kube-api-access-mpknb" (OuterVolumeSpecName: "kube-api-access-mpknb") pod "79ab7609-a704-4e48-bf27-52b61fca6c7d" (UID: "79ab7609-a704-4e48-bf27-52b61fca6c7d"). InnerVolumeSpecName "kube-api-access-mpknb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.941939 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ab7609-a704-4e48-bf27-52b61fca6c7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:20 crc kubenswrapper[4903]: I0128 17:22:20.941991 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpknb\" (UniqueName: \"kubernetes.io/projected/79ab7609-a704-4e48-bf27-52b61fca6c7d-kube-api-access-mpknb\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:21 crc kubenswrapper[4903]: I0128 17:22:21.438434 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f454-account-create-update-jvd5w" event={"ID":"79ab7609-a704-4e48-bf27-52b61fca6c7d","Type":"ContainerDied","Data":"cdd3c9adb1dd4dff1ad223b5c5b6d3acdcd0575e2e072e275b7d76a2d8ab41ee"} Jan 28 17:22:21 crc kubenswrapper[4903]: I0128 17:22:21.438472 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd3c9adb1dd4dff1ad223b5c5b6d3acdcd0575e2e072e275b7d76a2d8ab41ee" Jan 28 17:22:21 crc kubenswrapper[4903]: I0128 17:22:21.438516 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f454-account-create-update-jvd5w" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.380455 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jkxxx"] Jan 28 17:22:26 crc kubenswrapper[4903]: E0128 17:22:26.382176 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ab7609-a704-4e48-bf27-52b61fca6c7d" containerName="mariadb-account-create-update" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.382201 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ab7609-a704-4e48-bf27-52b61fca6c7d" containerName="mariadb-account-create-update" Jan 28 17:22:26 crc kubenswrapper[4903]: E0128 17:22:26.382300 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b390eb3b-8f83-451c-8979-f640f892f3bd" containerName="mariadb-database-create" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.382313 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b390eb3b-8f83-451c-8979-f640f892f3bd" containerName="mariadb-database-create" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.382898 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ab7609-a704-4e48-bf27-52b61fca6c7d" containerName="mariadb-account-create-update" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.382980 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b390eb3b-8f83-451c-8979-f640f892f3bd" containerName="mariadb-database-create" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.384278 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.391286 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.392496 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-256p4" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.436457 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jkxxx"] Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.441206 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-db-sync-config-data\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.441254 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-config-data\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.441389 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-combined-ca-bundle\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.441450 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9j5b\" (UniqueName: \"kubernetes.io/projected/c85c2276-594c-411a-a241-d17a6b2efe28-kube-api-access-j9j5b\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.542953 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-combined-ca-bundle\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.543024 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9j5b\" (UniqueName: \"kubernetes.io/projected/c85c2276-594c-411a-a241-d17a6b2efe28-kube-api-access-j9j5b\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.543125 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-db-sync-config-data\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.543143 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-config-data\") pod 
\"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.549142 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-config-data\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.549750 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-combined-ca-bundle\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.553446 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-db-sync-config-data\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.560281 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9j5b\" (UniqueName: \"kubernetes.io/projected/c85c2276-594c-411a-a241-d17a6b2efe28-kube-api-access-j9j5b\") pod \"glance-db-sync-jkxxx\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:26 crc kubenswrapper[4903]: I0128 17:22:26.713001 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:27 crc kubenswrapper[4903]: I0128 17:22:27.247510 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jkxxx"] Jan 28 17:22:27 crc kubenswrapper[4903]: I0128 17:22:27.487639 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jkxxx" event={"ID":"c85c2276-594c-411a-a241-d17a6b2efe28","Type":"ContainerStarted","Data":"2f639a4beaf87f6f12491541594dd0c6a4066efab69c4754f88b38a09d5ea7e8"} Jan 28 17:22:28 crc kubenswrapper[4903]: I0128 17:22:28.496968 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jkxxx" event={"ID":"c85c2276-594c-411a-a241-d17a6b2efe28","Type":"ContainerStarted","Data":"99fb111aef44fa4b9ba708bbc206771ad43ca4bbdf8f2fdc51c89273f326e1c6"} Jan 28 17:22:28 crc kubenswrapper[4903]: I0128 17:22:28.530596 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jkxxx" podStartSLOduration=2.530575715 podStartE2EDuration="2.530575715s" podCreationTimestamp="2026-01-28 17:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:28.523826961 +0000 UTC m=+5820.799798472" watchObservedRunningTime="2026-01-28 17:22:28.530575715 +0000 UTC m=+5820.806547226" Jan 28 17:22:31 crc kubenswrapper[4903]: I0128 17:22:31.527270 4903 generic.go:334] "Generic (PLEG): container finished" podID="c85c2276-594c-411a-a241-d17a6b2efe28" containerID="99fb111aef44fa4b9ba708bbc206771ad43ca4bbdf8f2fdc51c89273f326e1c6" exitCode=0 Jan 28 17:22:31 crc kubenswrapper[4903]: I0128 17:22:31.527331 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jkxxx" 
event={"ID":"c85c2276-594c-411a-a241-d17a6b2efe28","Type":"ContainerDied","Data":"99fb111aef44fa4b9ba708bbc206771ad43ca4bbdf8f2fdc51c89273f326e1c6"} Jan 28 17:22:32 crc kubenswrapper[4903]: I0128 17:22:32.413638 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:22:32 crc kubenswrapper[4903]: E0128 17:22:32.414112 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.120928 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.267559 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-combined-ca-bundle\") pod \"c85c2276-594c-411a-a241-d17a6b2efe28\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.267721 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-config-data\") pod \"c85c2276-594c-411a-a241-d17a6b2efe28\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.267764 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-db-sync-config-data\") pod \"c85c2276-594c-411a-a241-d17a6b2efe28\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.267811 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9j5b\" (UniqueName: \"kubernetes.io/projected/c85c2276-594c-411a-a241-d17a6b2efe28-kube-api-access-j9j5b\") pod \"c85c2276-594c-411a-a241-d17a6b2efe28\" (UID: \"c85c2276-594c-411a-a241-d17a6b2efe28\") " Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.273019 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85c2276-594c-411a-a241-d17a6b2efe28-kube-api-access-j9j5b" (OuterVolumeSpecName: "kube-api-access-j9j5b") pod "c85c2276-594c-411a-a241-d17a6b2efe28" (UID: "c85c2276-594c-411a-a241-d17a6b2efe28"). InnerVolumeSpecName "kube-api-access-j9j5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.273486 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c85c2276-594c-411a-a241-d17a6b2efe28" (UID: "c85c2276-594c-411a-a241-d17a6b2efe28"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.293052 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c85c2276-594c-411a-a241-d17a6b2efe28" (UID: "c85c2276-594c-411a-a241-d17a6b2efe28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.317383 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-config-data" (OuterVolumeSpecName: "config-data") pod "c85c2276-594c-411a-a241-d17a6b2efe28" (UID: "c85c2276-594c-411a-a241-d17a6b2efe28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.370266 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.370305 4903 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.370338 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9j5b\" (UniqueName: \"kubernetes.io/projected/c85c2276-594c-411a-a241-d17a6b2efe28-kube-api-access-j9j5b\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.370351 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c2276-594c-411a-a241-d17a6b2efe28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.547633 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jkxxx" event={"ID":"c85c2276-594c-411a-a241-d17a6b2efe28","Type":"ContainerDied","Data":"2f639a4beaf87f6f12491541594dd0c6a4066efab69c4754f88b38a09d5ea7e8"} Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.547674 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f639a4beaf87f6f12491541594dd0c6a4066efab69c4754f88b38a09d5ea7e8" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.547748 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jkxxx" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.893341 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:33 crc kubenswrapper[4903]: E0128 17:22:33.893709 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85c2276-594c-411a-a241-d17a6b2efe28" containerName="glance-db-sync" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.893722 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85c2276-594c-411a-a241-d17a6b2efe28" containerName="glance-db-sync" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.893893 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85c2276-594c-411a-a241-d17a6b2efe28" containerName="glance-db-sync" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.895611 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.898411 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.898622 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.898805 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-256p4" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.915672 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-665b6fb647-htl8h"] Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.917046 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.929850 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.951565 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-665b6fb647-htl8h"] Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.981812 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-dns-svc\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.981863 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.981893 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.981969 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-sb\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.982011 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-logs\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.982040 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.982082 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glf4\" (UniqueName: \"kubernetes.io/projected/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-kube-api-access-2glf4\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.982117 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-config\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " 
pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.982164 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.982190 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4s6m\" (UniqueName: \"kubernetes.io/projected/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-kube-api-access-v4s6m\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:33 crc kubenswrapper[4903]: I0128 17:22:33.982207 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-nb\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.085934 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-sb\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086071 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-logs\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086099 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086152 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2glf4\" (UniqueName: \"kubernetes.io/projected/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-kube-api-access-2glf4\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086191 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-config\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086246 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086275 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4s6m\" (UniqueName: \"kubernetes.io/projected/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-kube-api-access-v4s6m\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086297 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-nb\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086605 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-logs\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.086732 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.087063 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-sb\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.087310 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-config\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.087359 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-nb\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.087502 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-dns-svc\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.087587 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 
17:22:34.087617 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.087637 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.088226 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-dns-svc\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.089512 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.093026 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.093785 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.095345 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-scripts\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.098240 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-config-data\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.115888 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4s6m\" (UniqueName: \"kubernetes.io/projected/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-kube-api-access-v4s6m\") pod \"glance-default-external-api-0\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.117717 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glf4\" (UniqueName: \"kubernetes.io/projected/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-kube-api-access-2glf4\") pod \"dnsmasq-dns-665b6fb647-htl8h\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.135241 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.189482 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.189569 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.189695 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.189866 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnp9\" (UniqueName: \"kubernetes.io/projected/85b05934-6a3f-4065-b1b2-e89aadb075f1-kube-api-access-rrnp9\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.189940 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.190025 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.220317 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.236882 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.291550 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrnp9\" (UniqueName: \"kubernetes.io/projected/85b05934-6a3f-4065-b1b2-e89aadb075f1-kube-api-access-rrnp9\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.291611 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.291646 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.291724 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.291744 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.291773 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.292409 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-logs\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.292651 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.298936 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.305296 
4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.313105 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.318395 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrnp9\" (UniqueName: \"kubernetes.io/projected/85b05934-6a3f-4065-b1b2-e89aadb075f1-kube-api-access-rrnp9\") pod \"glance-default-internal-api-0\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.491135 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.831220 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-665b6fb647-htl8h"] Jan 28 17:22:34 crc kubenswrapper[4903]: W0128 17:22:34.855522 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea17e1e1_bcb8_4684_b86f_2bcef3e72d19.slice/crio-6c583586a85fe397280528ee8ad0c489b580ba72a8106731dfef3ad569d15047 WatchSource:0}: Error finding container 6c583586a85fe397280528ee8ad0c489b580ba72a8106731dfef3ad569d15047: Status 404 returned error can't find the container with id 6c583586a85fe397280528ee8ad0c489b580ba72a8106731dfef3ad569d15047 Jan 28 17:22:34 crc kubenswrapper[4903]: I0128 17:22:34.859138 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:35 crc kubenswrapper[4903]: I0128 17:22:35.059749 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:35 crc kubenswrapper[4903]: W0128 17:22:35.076727 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85b05934_6a3f_4065_b1b2_e89aadb075f1.slice/crio-1172ea6731ee6bff9cc46992f971c9cec913db1a4b6d39379fc7f06668d76819 WatchSource:0}: Error finding container 1172ea6731ee6bff9cc46992f971c9cec913db1a4b6d39379fc7f06668d76819: Status 404 returned error can't find the container with id 1172ea6731ee6bff9cc46992f971c9cec913db1a4b6d39379fc7f06668d76819 Jan 28 17:22:35 crc kubenswrapper[4903]: I0128 17:22:35.265339 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:35 crc kubenswrapper[4903]: I0128 17:22:35.581426 4903 generic.go:334] "Generic (PLEG): container finished" podID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerID="53592c7ec4a67ce99e9d73a00d8e95198e278a9324c810f0a38795cac229ed02" exitCode=0 Jan 28 17:22:35 crc kubenswrapper[4903]: I0128 17:22:35.582342 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" 
event={"ID":"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e","Type":"ContainerDied","Data":"53592c7ec4a67ce99e9d73a00d8e95198e278a9324c810f0a38795cac229ed02"} Jan 28 17:22:35 crc kubenswrapper[4903]: I0128 17:22:35.582453 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" event={"ID":"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e","Type":"ContainerStarted","Data":"6f9d0e3507111bd073123ae4e1f81ce5961574d4d98be4be0b007aa949cad3c8"} Jan 28 17:22:35 crc kubenswrapper[4903]: I0128 17:22:35.585771 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19","Type":"ContainerStarted","Data":"6c583586a85fe397280528ee8ad0c489b580ba72a8106731dfef3ad569d15047"} Jan 28 17:22:35 crc kubenswrapper[4903]: I0128 17:22:35.587715 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85b05934-6a3f-4065-b1b2-e89aadb075f1","Type":"ContainerStarted","Data":"1172ea6731ee6bff9cc46992f971c9cec913db1a4b6d39379fc7f06668d76819"} Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.585292 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.613070 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85b05934-6a3f-4065-b1b2-e89aadb075f1","Type":"ContainerStarted","Data":"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b"} Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.616905 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" event={"ID":"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e","Type":"ContainerStarted","Data":"81be15fdd8343214a566a7022156eaf6e27036b3320ea1267e253277beb74449"} Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.618218 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.622717 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19","Type":"ContainerStarted","Data":"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c"} Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.622765 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19","Type":"ContainerStarted","Data":"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44"} Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.622893 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-log" containerID="cri-o://5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44" gracePeriod=30 Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.623013 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-httpd" containerID="cri-o://cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c" gracePeriod=30 Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.663241 4903 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.6632223870000002 podStartE2EDuration="3.663222387s" podCreationTimestamp="2026-01-28 17:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:36.659375653 +0000 UTC m=+5828.935347164" watchObservedRunningTime="2026-01-28 17:22:36.663222387 +0000 UTC m=+5828.939193898" Jan 28 17:22:36 crc kubenswrapper[4903]: I0128 17:22:36.664210 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" podStartSLOduration=3.664199344 podStartE2EDuration="3.664199344s" podCreationTimestamp="2026-01-28 17:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:36.637861249 +0000 UTC m=+5828.913832780" watchObservedRunningTime="2026-01-28 17:22:36.664199344 +0000 UTC m=+5828.940170875" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.188302 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.276723 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-scripts\") pod \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.276784 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-combined-ca-bundle\") pod \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.276844 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-httpd-run\") pod \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.276915 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-config-data\") pod \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.277016 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4s6m\" (UniqueName: \"kubernetes.io/projected/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-kube-api-access-v4s6m\") pod \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.277049 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-logs\") pod \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\" (UID: \"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19\") " Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.277403 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" (UID: "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.277712 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-logs" (OuterVolumeSpecName: "logs") pod "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" (UID: "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.282766 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-scripts" (OuterVolumeSpecName: "scripts") pod "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" (UID: "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.282871 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-kube-api-access-v4s6m" (OuterVolumeSpecName: "kube-api-access-v4s6m") pod "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" (UID: "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19"). InnerVolumeSpecName "kube-api-access-v4s6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.303078 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" (UID: "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.341097 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-config-data" (OuterVolumeSpecName: "config-data") pod "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" (UID: "ea17e1e1-bcb8-4684-b86f-2bcef3e72d19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.379602 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4s6m\" (UniqueName: \"kubernetes.io/projected/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-kube-api-access-v4s6m\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.379647 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.379662 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.379674 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.379686 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.379697 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.632039 4903 generic.go:334] "Generic (PLEG): container finished" podID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerID="cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c" exitCode=143 Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.632068 4903 generic.go:334] "Generic (PLEG): container finished" podID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerID="5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44" exitCode=143 Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.632114 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19","Type":"ContainerDied","Data":"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c"} Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.632143 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19","Type":"ContainerDied","Data":"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44"} Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.632152 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ea17e1e1-bcb8-4684-b86f-2bcef3e72d19","Type":"ContainerDied","Data":"6c583586a85fe397280528ee8ad0c489b580ba72a8106731dfef3ad569d15047"} Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.632180 4903 scope.go:117] "RemoveContainer" containerID="cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.632189 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.634852 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85b05934-6a3f-4065-b1b2-e89aadb075f1","Type":"ContainerStarted","Data":"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554"} Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.635055 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerName="glance-log" containerID="cri-o://f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b" gracePeriod=30 Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.635086 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerName="glance-httpd" containerID="cri-o://3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554" gracePeriod=30 Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.656202 4903 scope.go:117] "RemoveContainer" containerID="5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.676808 4903 scope.go:117] "RemoveContainer" containerID="cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c" Jan 28 17:22:37 crc kubenswrapper[4903]: E0128 17:22:37.685304 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c\": container with ID starting with cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c not found: ID does not exist" containerID="cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.685346 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c"} err="failed to get container status \"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c\": rpc error: code = NotFound desc = could not find container \"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c\": container with ID starting with cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c not found: ID does not exist" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.685369 4903 scope.go:117] "RemoveContainer" containerID="5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.685461 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.685440046 podStartE2EDuration="3.685440046s" podCreationTimestamp="2026-01-28 17:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:37.667345905 +0000 UTC m=+5829.943317426" watchObservedRunningTime="2026-01-28 17:22:37.685440046 +0000 UTC m=+5829.961411557" Jan 28 17:22:37 crc kubenswrapper[4903]: E0128 17:22:37.686467 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44\": container with ID 
starting with 5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44 not found: ID does not exist" containerID="5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.686498 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44"} err="failed to get container status \"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44\": rpc error: code = NotFound desc = could not find container \"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44\": container with ID starting with 5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44 not found: ID does not exist" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.686513 4903 scope.go:117] "RemoveContainer" containerID="cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.686933 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c"} err="failed to get container status \"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c\": rpc error: code = NotFound desc = could not find container \"cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c\": container with ID starting with cda1d1abc5e5e3aa2fba52148caf10be6bfb7f824d2afc9494224b742b0d256c not found: ID does not exist" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.686955 4903 scope.go:117] "RemoveContainer" containerID="5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.687358 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44"} err="failed to get container status \"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44\": rpc error: code = NotFound desc = could not find container \"5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44\": container with ID starting with 5a4966ecfdf765aad925d4750fac3099690023a0405b2b91b21252e8cd75cb44 not found: ID does not exist" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.697556 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.711816 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.725012 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:37 crc kubenswrapper[4903]: E0128 17:22:37.725454 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-httpd" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.725474 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-httpd" Jan 28 17:22:37 crc kubenswrapper[4903]: E0128 17:22:37.725516 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-log" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.725540 4903 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-log" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.725764 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-log" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.725786 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" containerName="glance-httpd" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.726882 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.738945 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.739226 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.747560 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.787142 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.787202 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-scripts\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.787226 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.787285 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.787344 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-config-data\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.787364 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krtn2\" (UniqueName: \"kubernetes.io/projected/a81328fb-7a19-4672-babf-bde845e899aa-kube-api-access-krtn2\") pod \"glance-default-external-api-0\" (UID: 
\"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.787395 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-logs\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.888412 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-scripts\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.888474 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.888563 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.888623 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-config-data\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.888647 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krtn2\" (UniqueName: \"kubernetes.io/projected/a81328fb-7a19-4672-babf-bde845e899aa-kube-api-access-krtn2\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.888679 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-logs\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.888714 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.889248 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" 
Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.889589 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-logs\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.892474 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.892745 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.893711 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-scripts\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.894427 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-config-data\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:37 crc kubenswrapper[4903]: I0128 17:22:37.905050 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krtn2\" (UniqueName: \"kubernetes.io/projected/a81328fb-7a19-4672-babf-bde845e899aa-kube-api-access-krtn2\") pod \"glance-default-external-api-0\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " pod="openstack/glance-default-external-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.047974 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.373788 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.407887 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrnp9\" (UniqueName: \"kubernetes.io/projected/85b05934-6a3f-4065-b1b2-e89aadb075f1-kube-api-access-rrnp9\") pod \"85b05934-6a3f-4065-b1b2-e89aadb075f1\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.408000 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-logs\") pod \"85b05934-6a3f-4065-b1b2-e89aadb075f1\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.408066 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-scripts\") pod \"85b05934-6a3f-4065-b1b2-e89aadb075f1\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.408095 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-combined-ca-bundle\") pod \"85b05934-6a3f-4065-b1b2-e89aadb075f1\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.408133 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-config-data\") pod \"85b05934-6a3f-4065-b1b2-e89aadb075f1\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.408214 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-httpd-run\") pod \"85b05934-6a3f-4065-b1b2-e89aadb075f1\" (UID: \"85b05934-6a3f-4065-b1b2-e89aadb075f1\") " Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.408963 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85b05934-6a3f-4065-b1b2-e89aadb075f1" (UID: "85b05934-6a3f-4065-b1b2-e89aadb075f1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.409010 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-logs" (OuterVolumeSpecName: "logs") pod "85b05934-6a3f-4065-b1b2-e89aadb075f1" (UID: "85b05934-6a3f-4065-b1b2-e89aadb075f1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.409487 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.409513 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85b05934-6a3f-4065-b1b2-e89aadb075f1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.426472 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-scripts" (OuterVolumeSpecName: "scripts") pod "85b05934-6a3f-4065-b1b2-e89aadb075f1" (UID: "85b05934-6a3f-4065-b1b2-e89aadb075f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.427133 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b05934-6a3f-4065-b1b2-e89aadb075f1-kube-api-access-rrnp9" (OuterVolumeSpecName: "kube-api-access-rrnp9") pod "85b05934-6a3f-4065-b1b2-e89aadb075f1" (UID: "85b05934-6a3f-4065-b1b2-e89aadb075f1"). InnerVolumeSpecName "kube-api-access-rrnp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.432828 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea17e1e1-bcb8-4684-b86f-2bcef3e72d19" path="/var/lib/kubelet/pods/ea17e1e1-bcb8-4684-b86f-2bcef3e72d19/volumes" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.454386 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85b05934-6a3f-4065-b1b2-e89aadb075f1" (UID: "85b05934-6a3f-4065-b1b2-e89aadb075f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.469219 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-config-data" (OuterVolumeSpecName: "config-data") pod "85b05934-6a3f-4065-b1b2-e89aadb075f1" (UID: "85b05934-6a3f-4065-b1b2-e89aadb075f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.511576 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrnp9\" (UniqueName: \"kubernetes.io/projected/85b05934-6a3f-4065-b1b2-e89aadb075f1-kube-api-access-rrnp9\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.511621 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.511632 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.511644 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85b05934-6a3f-4065-b1b2-e89aadb075f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.637328 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.645216 4903 generic.go:334] "Generic (PLEG): container finished" podID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerID="3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554" exitCode=0 Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.645321 4903 generic.go:334] "Generic (PLEG): container finished" podID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerID="f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b" exitCode=143 Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.645328 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.645366 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85b05934-6a3f-4065-b1b2-e89aadb075f1","Type":"ContainerDied","Data":"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554"} Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.645410 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85b05934-6a3f-4065-b1b2-e89aadb075f1","Type":"ContainerDied","Data":"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b"} Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.645424 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85b05934-6a3f-4065-b1b2-e89aadb075f1","Type":"ContainerDied","Data":"1172ea6731ee6bff9cc46992f971c9cec913db1a4b6d39379fc7f06668d76819"} Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.645442 4903 scope.go:117] "RemoveContainer" containerID="3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554" Jan 28 17:22:38 crc kubenswrapper[4903]: W0128 17:22:38.646704 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda81328fb_7a19_4672_babf_bde845e899aa.slice/crio-6a60f35b023844968a9cd62be112b15e125f6a0c5d10d7cb06a062f6f68bbd89 WatchSource:0}: Error finding container 6a60f35b023844968a9cd62be112b15e125f6a0c5d10d7cb06a062f6f68bbd89: Status 404 returned error can't find the container with id 6a60f35b023844968a9cd62be112b15e125f6a0c5d10d7cb06a062f6f68bbd89 Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.700869 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.709586 4903 scope.go:117] "RemoveContainer" containerID="f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.719249 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.732199 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:38 crc kubenswrapper[4903]: E0128 17:22:38.733074 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerName="glance-log" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.733104 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerName="glance-log" Jan 28 17:22:38 crc kubenswrapper[4903]: E0128 17:22:38.733133 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerName="glance-httpd" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.733142 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerName="glance-httpd" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.733842 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" containerName="glance-httpd" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.733875 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" 
containerName="glance-log" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.736219 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.739152 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.739261 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.742290 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.749784 4903 scope.go:117] "RemoveContainer" containerID="3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554" Jan 28 17:22:38 crc kubenswrapper[4903]: E0128 17:22:38.753090 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554\": container with ID starting with 3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554 not found: ID does not exist" containerID="3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.753138 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554"} err="failed to get container status \"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554\": rpc error: code = NotFound desc = could not find container \"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554\": container with ID starting with 3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554 not found: ID does not exist" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.753163 4903 scope.go:117] "RemoveContainer" containerID="f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b" Jan 28 17:22:38 crc kubenswrapper[4903]: E0128 17:22:38.753886 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b\": container with ID starting with f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b not found: ID does not exist" containerID="f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.753939 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b"} err="failed to get container status \"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b\": rpc error: code = NotFound desc = could not find container \"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b\": container with ID starting with f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b not found: ID does not exist" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.753960 4903 scope.go:117] "RemoveContainer" containerID="3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.755096 4903 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554"} err="failed to get container status \"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554\": rpc error: code = NotFound desc = could not find container \"3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554\": container with ID starting with 3100e1122e52281d25e499a1cf4c1e0301d8c7dbb28f07d16fdf164fb0009554 not found: ID does not exist" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.755147 4903 scope.go:117] "RemoveContainer" containerID="f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.761123 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b"} err="failed to get container status \"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b\": rpc error: code = NotFound desc = could not find container \"f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b\": container with ID starting with f0bd6b5d094e607ae4b1eaaa0a28583f6dafbf44aec16d0fa93b838dcb4dcc5b not found: ID does not exist" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.817561 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.817845 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.817871 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-logs\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.817916 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrjd\" (UniqueName: \"kubernetes.io/projected/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-kube-api-access-gsrjd\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.817939 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.818002 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.818024 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.920956 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.921013 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.921039 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-logs\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.921091 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrjd\" (UniqueName: \"kubernetes.io/projected/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-kube-api-access-gsrjd\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.921113 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.921130 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.921148 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.922208 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.923142 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-logs\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.926075 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.926089 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.926588 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.927911 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:38 crc kubenswrapper[4903]: I0128 17:22:38.938663 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrjd\" (UniqueName: \"kubernetes.io/projected/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-kube-api-access-gsrjd\") pod \"glance-default-internal-api-0\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:22:39 crc kubenswrapper[4903]: I0128 17:22:39.060951 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:39 crc kubenswrapper[4903]: I0128 17:22:39.697255 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a81328fb-7a19-4672-babf-bde845e899aa","Type":"ContainerStarted","Data":"4ec1705d34eec6bd3c07a4c78cde506dc751984da4d0202d23843d6f55983b6f"} Jan 28 17:22:39 crc kubenswrapper[4903]: I0128 17:22:39.697651 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a81328fb-7a19-4672-babf-bde845e899aa","Type":"ContainerStarted","Data":"6a60f35b023844968a9cd62be112b15e125f6a0c5d10d7cb06a062f6f68bbd89"} Jan 28 17:22:39 crc kubenswrapper[4903]: I0128 17:22:39.797628 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:22:40 crc kubenswrapper[4903]: I0128 17:22:40.430227 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b05934-6a3f-4065-b1b2-e89aadb075f1" path="/var/lib/kubelet/pods/85b05934-6a3f-4065-b1b2-e89aadb075f1/volumes" Jan 28 17:22:40 crc kubenswrapper[4903]: I0128 17:22:40.719300 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a81328fb-7a19-4672-babf-bde845e899aa","Type":"ContainerStarted","Data":"c1fca57c96424f6bd69443030714c72aa7c068520efa1215ab657585fd64c242"} Jan 28 17:22:40 crc kubenswrapper[4903]: I0128 17:22:40.721637 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"656574bf-0d38-4b5c-b93d-ea7d83da6ff6","Type":"ContainerStarted","Data":"328ff5d9baa13164bc2ccaca95ffb90e49a2af512f2d4bf932eaabf8d61010b4"} Jan 28 17:22:40 crc kubenswrapper[4903]: I0128 17:22:40.721680 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"656574bf-0d38-4b5c-b93d-ea7d83da6ff6","Type":"ContainerStarted","Data":"c4ebb508ccd38fc9e5eebb7830f8429c62e0432c734e411162e953df99b05465"} Jan 28 17:22:40 crc kubenswrapper[4903]: I0128 17:22:40.745594 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.743496437 podStartE2EDuration="3.743496437s" podCreationTimestamp="2026-01-28 17:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:40.735670474 +0000 UTC m=+5833.011642015" watchObservedRunningTime="2026-01-28 17:22:40.743496437 +0000 UTC m=+5833.019467948" Jan 28 17:22:41 crc kubenswrapper[4903]: I0128 17:22:41.734917 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"656574bf-0d38-4b5c-b93d-ea7d83da6ff6","Type":"ContainerStarted","Data":"2ccbcd4dbbd7109a4248162bb50d9eeff6410f3e480fce1f9ffffe6ab0c94753"} Jan 28 17:22:41 crc kubenswrapper[4903]: I0128 17:22:41.766412 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.766380103 podStartE2EDuration="3.766380103s" podCreationTimestamp="2026-01-28 17:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:22:41.762865218 +0000 UTC m=+5834.038836749" watchObservedRunningTime="2026-01-28 17:22:41.766380103 +0000 UTC m=+5834.042351614" Jan 28 17:22:44 crc 
kubenswrapper[4903]: I0128 17:22:44.239833 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.309152 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4d4d8655-ngz2q"] Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.309399 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" podUID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerName="dnsmasq-dns" containerID="cri-o://a017cb6b0c626f509ada0f5d236ebd87c2cc43bf6c3959e482b1da506fc80d64" gracePeriod=10 Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.768117 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" event={"ID":"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc","Type":"ContainerDied","Data":"a017cb6b0c626f509ada0f5d236ebd87c2cc43bf6c3959e482b1da506fc80d64"} Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.768041 4903 generic.go:334] "Generic (PLEG): container finished" podID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerID="a017cb6b0c626f509ada0f5d236ebd87c2cc43bf6c3959e482b1da506fc80d64" exitCode=0 Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.870153 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.938025 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-config\") pod \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.938088 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-sb\") pod \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.938190 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-nb\") pod \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.938248 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsq82\" (UniqueName: \"kubernetes.io/projected/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-kube-api-access-vsq82\") pod \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.938403 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-dns-svc\") pod \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\" (UID: \"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc\") " Jan 28 17:22:44 crc kubenswrapper[4903]: I0128 17:22:44.949940 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-kube-api-access-vsq82" (OuterVolumeSpecName: "kube-api-access-vsq82") pod "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" (UID: "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc"). 
InnerVolumeSpecName "kube-api-access-vsq82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.010243 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" (UID: "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.017246 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" (UID: "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.019185 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-config" (OuterVolumeSpecName: "config") pod "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" (UID: "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.025377 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" (UID: "f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.041039 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.041074 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.041084 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.041094 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.041103 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsq82\" (UniqueName: \"kubernetes.io/projected/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc-kube-api-access-vsq82\") on node \"crc\" DevicePath \"\"" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.780833 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" event={"ID":"f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc","Type":"ContainerDied","Data":"9ddbca2e26bf7bcbe35bf0ca93dd244588314219c6a8eb51306dfd25cd5aa633"} Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.780908 4903 scope.go:117] 
"RemoveContainer" containerID="a017cb6b0c626f509ada0f5d236ebd87c2cc43bf6c3959e482b1da506fc80d64" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.782172 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4d4d8655-ngz2q" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.812616 4903 scope.go:117] "RemoveContainer" containerID="a44c804c296864054cd8db400ba626cb9766b3e10c61f4cbb5b3917e194928d9" Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.821076 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4d4d8655-ngz2q"] Jan 28 17:22:45 crc kubenswrapper[4903]: I0128 17:22:45.833328 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c4d4d8655-ngz2q"] Jan 28 17:22:46 crc kubenswrapper[4903]: I0128 17:22:46.414121 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:22:46 crc kubenswrapper[4903]: E0128 17:22:46.414442 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:22:46 crc kubenswrapper[4903]: I0128 17:22:46.427372 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" path="/var/lib/kubelet/pods/f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc/volumes" Jan 28 17:22:48 crc kubenswrapper[4903]: I0128 17:22:48.048749 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 17:22:48 crc kubenswrapper[4903]: I0128 17:22:48.049741 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 17:22:48 crc kubenswrapper[4903]: I0128 17:22:48.078920 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 17:22:48 crc kubenswrapper[4903]: I0128 17:22:48.113303 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 17:22:48 crc kubenswrapper[4903]: I0128 17:22:48.810017 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 17:22:48 crc kubenswrapper[4903]: I0128 17:22:48.810084 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 17:22:49 crc kubenswrapper[4903]: I0128 17:22:49.062071 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:49 crc kubenswrapper[4903]: I0128 17:22:49.062470 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:49 crc kubenswrapper[4903]: I0128 17:22:49.100411 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:49 crc kubenswrapper[4903]: I0128 17:22:49.121641 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 
17:22:49 crc kubenswrapper[4903]: I0128 17:22:49.818668 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:49 crc kubenswrapper[4903]: I0128 17:22:49.818710 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:50 crc kubenswrapper[4903]: I0128 17:22:50.720942 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 17:22:50 crc kubenswrapper[4903]: I0128 17:22:50.759288 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 17:22:51 crc kubenswrapper[4903]: I0128 17:22:51.860989 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:51 crc kubenswrapper[4903]: I0128 17:22:51.861380 4903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 17:22:51 crc kubenswrapper[4903]: I0128 17:22:51.931855 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 17:22:54 crc kubenswrapper[4903]: I0128 17:22:54.398251 4903 scope.go:117] "RemoveContainer" containerID="c1c41d02722fd4d483222b62bbceb32a3682720522031cbfcfdef71ca6f24e58" Jan 28 17:22:57 crc kubenswrapper[4903]: I0128 17:22:57.413984 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:22:57 crc kubenswrapper[4903]: E0128 17:22:57.414718 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.789011 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-898hf"] Jan 28 17:22:59 crc kubenswrapper[4903]: E0128 17:22:59.789842 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerName="dnsmasq-dns" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.789860 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerName="dnsmasq-dns" Jan 28 17:22:59 crc kubenswrapper[4903]: E0128 17:22:59.789904 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerName="init" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.789913 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerName="init" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.790125 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c72d10-b05c-44f9-8f7b-96e5a10e4ccc" containerName="dnsmasq-dns" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.790967 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-898hf" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.800201 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-898hf"] Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.885809 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-73d1-account-create-update-8rtl2"] Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.887078 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvjl\" (UniqueName: \"kubernetes.io/projected/d54c18d4-b547-431e-9e80-d077a19f9a20-kube-api-access-7hvjl\") pod \"placement-db-create-898hf\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " pod="openstack/placement-db-create-898hf" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.887128 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c18d4-b547-431e-9e80-d077a19f9a20-operator-scripts\") pod \"placement-db-create-898hf\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " pod="openstack/placement-db-create-898hf" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.887499 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.890256 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.896981 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-73d1-account-create-update-8rtl2"] Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.990052 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-operator-scripts\") pod \"placement-73d1-account-create-update-8rtl2\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.990295 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hvjl\" (UniqueName: \"kubernetes.io/projected/d54c18d4-b547-431e-9e80-d077a19f9a20-kube-api-access-7hvjl\") pod \"placement-db-create-898hf\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " pod="openstack/placement-db-create-898hf" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.990375 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c18d4-b547-431e-9e80-d077a19f9a20-operator-scripts\") pod \"placement-db-create-898hf\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " pod="openstack/placement-db-create-898hf" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.990576 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qzd\" (UniqueName: \"kubernetes.io/projected/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-kube-api-access-z5qzd\") pod \"placement-73d1-account-create-update-8rtl2\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:22:59 crc kubenswrapper[4903]: I0128 17:22:59.991263 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c18d4-b547-431e-9e80-d077a19f9a20-operator-scripts\") pod \"placement-db-create-898hf\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " pod="openstack/placement-db-create-898hf" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.009566 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hvjl\" (UniqueName: \"kubernetes.io/projected/d54c18d4-b547-431e-9e80-d077a19f9a20-kube-api-access-7hvjl\") pod \"placement-db-create-898hf\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " pod="openstack/placement-db-create-898hf" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.092823 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-operator-scripts\") pod \"placement-73d1-account-create-update-8rtl2\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.092969 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qzd\" (UniqueName: \"kubernetes.io/projected/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-kube-api-access-z5qzd\") pod \"placement-73d1-account-create-update-8rtl2\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.093895 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-operator-scripts\") pod \"placement-73d1-account-create-update-8rtl2\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.110213 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qzd\" (UniqueName: \"kubernetes.io/projected/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-kube-api-access-z5qzd\") pod \"placement-73d1-account-create-update-8rtl2\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.112541 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-898hf" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.208697 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.565683 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-898hf"] Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.690624 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-73d1-account-create-update-8rtl2"] Jan 28 17:23:00 crc kubenswrapper[4903]: W0128 17:23:00.697019 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dfe1130_fa3c_4b3b_9da7_4e564ae28488.slice/crio-d8fc53c95c2289fa6fd333d5156c1654aaf6518e2906d50d37b91c6a1d2e84b9 WatchSource:0}: Error finding container d8fc53c95c2289fa6fd333d5156c1654aaf6518e2906d50d37b91c6a1d2e84b9: Status 404 returned error can't find the container with id d8fc53c95c2289fa6fd333d5156c1654aaf6518e2906d50d37b91c6a1d2e84b9 Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.920076 4903 generic.go:334] "Generic (PLEG): container finished" podID="d54c18d4-b547-431e-9e80-d077a19f9a20" containerID="9efa41887320f9581d3e54f25b926d7296397e61a57e173acf9fec3970722af7" exitCode=0 Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.920136 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-898hf" event={"ID":"d54c18d4-b547-431e-9e80-d077a19f9a20","Type":"ContainerDied","Data":"9efa41887320f9581d3e54f25b926d7296397e61a57e173acf9fec3970722af7"} Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.920203 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-898hf" event={"ID":"d54c18d4-b547-431e-9e80-d077a19f9a20","Type":"ContainerStarted","Data":"3d85b0b5dcaab7c557b3a9c8ac31457799dd2ca98af287bb0e1a5aaaf3820313"} Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.921979 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-73d1-account-create-update-8rtl2" event={"ID":"0dfe1130-fa3c-4b3b-9da7-4e564ae28488","Type":"ContainerStarted","Data":"3a751e8e4561fc579480f0746f38341c73e8d20b82d7d62c29f630309edc816e"} Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.922044 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-73d1-account-create-update-8rtl2" event={"ID":"0dfe1130-fa3c-4b3b-9da7-4e564ae28488","Type":"ContainerStarted","Data":"d8fc53c95c2289fa6fd333d5156c1654aaf6518e2906d50d37b91c6a1d2e84b9"} Jan 28 17:23:00 crc kubenswrapper[4903]: I0128 17:23:00.952827 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-73d1-account-create-update-8rtl2" podStartSLOduration=1.9528073030000002 podStartE2EDuration="1.952807303s" podCreationTimestamp="2026-01-28 17:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:23:00.948907048 +0000 UTC m=+5853.224878559" watchObservedRunningTime="2026-01-28 17:23:00.952807303 +0000 UTC m=+5853.228778814" Jan 28 17:23:01 crc kubenswrapper[4903]: I0128 17:23:01.933268 4903 generic.go:334] "Generic (PLEG): container finished" podID="0dfe1130-fa3c-4b3b-9da7-4e564ae28488" containerID="3a751e8e4561fc579480f0746f38341c73e8d20b82d7d62c29f630309edc816e" exitCode=0 Jan 28 17:23:01 crc kubenswrapper[4903]: I0128 17:23:01.933458 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-73d1-account-create-update-8rtl2" 
event={"ID":"0dfe1130-fa3c-4b3b-9da7-4e564ae28488","Type":"ContainerDied","Data":"3a751e8e4561fc579480f0746f38341c73e8d20b82d7d62c29f630309edc816e"} Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.255065 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-898hf" Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.335376 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hvjl\" (UniqueName: \"kubernetes.io/projected/d54c18d4-b547-431e-9e80-d077a19f9a20-kube-api-access-7hvjl\") pod \"d54c18d4-b547-431e-9e80-d077a19f9a20\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.335998 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c18d4-b547-431e-9e80-d077a19f9a20-operator-scripts\") pod \"d54c18d4-b547-431e-9e80-d077a19f9a20\" (UID: \"d54c18d4-b547-431e-9e80-d077a19f9a20\") " Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.336733 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54c18d4-b547-431e-9e80-d077a19f9a20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d54c18d4-b547-431e-9e80-d077a19f9a20" (UID: "d54c18d4-b547-431e-9e80-d077a19f9a20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.341707 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54c18d4-b547-431e-9e80-d077a19f9a20-kube-api-access-7hvjl" (OuterVolumeSpecName: "kube-api-access-7hvjl") pod "d54c18d4-b547-431e-9e80-d077a19f9a20" (UID: "d54c18d4-b547-431e-9e80-d077a19f9a20"). InnerVolumeSpecName "kube-api-access-7hvjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.438568 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54c18d4-b547-431e-9e80-d077a19f9a20-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.438601 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hvjl\" (UniqueName: \"kubernetes.io/projected/d54c18d4-b547-431e-9e80-d077a19f9a20-kube-api-access-7hvjl\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.944011 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-898hf" event={"ID":"d54c18d4-b547-431e-9e80-d077a19f9a20","Type":"ContainerDied","Data":"3d85b0b5dcaab7c557b3a9c8ac31457799dd2ca98af287bb0e1a5aaaf3820313"} Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.944066 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d85b0b5dcaab7c557b3a9c8ac31457799dd2ca98af287bb0e1a5aaaf3820313" Jan 28 17:23:02 crc kubenswrapper[4903]: I0128 17:23:02.944029 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-898hf" Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.270676 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.355795 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5qzd\" (UniqueName: \"kubernetes.io/projected/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-kube-api-access-z5qzd\") pod \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.356036 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-operator-scripts\") pod \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\" (UID: \"0dfe1130-fa3c-4b3b-9da7-4e564ae28488\") " Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.356981 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dfe1130-fa3c-4b3b-9da7-4e564ae28488" (UID: "0dfe1130-fa3c-4b3b-9da7-4e564ae28488"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.360126 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-kube-api-access-z5qzd" (OuterVolumeSpecName: "kube-api-access-z5qzd") pod "0dfe1130-fa3c-4b3b-9da7-4e564ae28488" (UID: "0dfe1130-fa3c-4b3b-9da7-4e564ae28488"). InnerVolumeSpecName "kube-api-access-z5qzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.457763 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.457796 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5qzd\" (UniqueName: \"kubernetes.io/projected/0dfe1130-fa3c-4b3b-9da7-4e564ae28488-kube-api-access-z5qzd\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.951085 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-73d1-account-create-update-8rtl2" event={"ID":"0dfe1130-fa3c-4b3b-9da7-4e564ae28488","Type":"ContainerDied","Data":"d8fc53c95c2289fa6fd333d5156c1654aaf6518e2906d50d37b91c6a1d2e84b9"} Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.951125 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-73d1-account-create-update-8rtl2" Jan 28 17:23:03 crc kubenswrapper[4903]: I0128 17:23:03.951137 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8fc53c95c2289fa6fd333d5156c1654aaf6518e2906d50d37b91c6a1d2e84b9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.303018 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb6d4cc67-7zkv2"] Jan 28 17:23:05 crc kubenswrapper[4903]: E0128 17:23:05.303642 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54c18d4-b547-431e-9e80-d077a19f9a20" containerName="mariadb-database-create" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.303653 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54c18d4-b547-431e-9e80-d077a19f9a20" containerName="mariadb-database-create" Jan 28 17:23:05 crc kubenswrapper[4903]: E0128 17:23:05.303679 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe1130-fa3c-4b3b-9da7-4e564ae28488" containerName="mariadb-account-create-update" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.303685 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe1130-fa3c-4b3b-9da7-4e564ae28488" containerName="mariadb-account-create-update" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.303840 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfe1130-fa3c-4b3b-9da7-4e564ae28488" containerName="mariadb-account-create-update" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.303856 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54c18d4-b547-431e-9e80-d077a19f9a20" containerName="mariadb-database-create" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.304797 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.324496 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb6d4cc67-7zkv2"] Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.348273 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-97wf9"] Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.349380 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.353138 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hrws7" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.354982 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.357883 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.377740 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-97wf9"] Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.401694 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-dns-svc\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.401775 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.401807 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.401926 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-config\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.401990 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wxhw\" (UniqueName: \"kubernetes.io/projected/28bcef49-09f5-4d52-b6d5-022be9688809-kube-api-access-6wxhw\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503195 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wxhw\" (UniqueName: \"kubernetes.io/projected/28bcef49-09f5-4d52-b6d5-022be9688809-kube-api-access-6wxhw\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503245 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-dns-svc\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc 
kubenswrapper[4903]: I0128 17:23:05.503301 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503327 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503398 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-scripts\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503448 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-config-data\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503502 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47bg\" (UniqueName: \"kubernetes.io/projected/b05587f5-cb99-43be-9bdf-4c763735c0da-kube-api-access-l47bg\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503576 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-combined-ca-bundle\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503620 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-config\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.503667 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05587f5-cb99-43be-9bdf-4c763735c0da-logs\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.504387 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.504684 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.504712 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-dns-svc\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.504922 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-config\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.526261 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wxhw\" (UniqueName: \"kubernetes.io/projected/28bcef49-09f5-4d52-b6d5-022be9688809-kube-api-access-6wxhw\") pod \"dnsmasq-dns-6bb6d4cc67-7zkv2\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.605824 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-scripts\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.605904 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-config-data\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.605945 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l47bg\" (UniqueName: \"kubernetes.io/projected/b05587f5-cb99-43be-9bdf-4c763735c0da-kube-api-access-l47bg\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.605990 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-combined-ca-bundle\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.606035 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05587f5-cb99-43be-9bdf-4c763735c0da-logs\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.606624 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05587f5-cb99-43be-9bdf-4c763735c0da-logs\") pod 
\"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.609438 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-scripts\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.610667 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-combined-ca-bundle\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.611787 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-config-data\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.624959 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47bg\" (UniqueName: \"kubernetes.io/projected/b05587f5-cb99-43be-9bdf-4c763735c0da-kube-api-access-l47bg\") pod \"placement-db-sync-97wf9\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.625568 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:05 crc kubenswrapper[4903]: I0128 17:23:05.668155 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:06 crc kubenswrapper[4903]: I0128 17:23:06.165123 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb6d4cc67-7zkv2"] Jan 28 17:23:06 crc kubenswrapper[4903]: I0128 17:23:06.292920 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-97wf9"] Jan 28 17:23:06 crc kubenswrapper[4903]: I0128 17:23:06.976440 4903 generic.go:334] "Generic (PLEG): container finished" podID="28bcef49-09f5-4d52-b6d5-022be9688809" containerID="ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15" exitCode=0 Jan 28 17:23:06 crc kubenswrapper[4903]: I0128 17:23:06.976512 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" event={"ID":"28bcef49-09f5-4d52-b6d5-022be9688809","Type":"ContainerDied","Data":"ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15"} Jan 28 17:23:06 crc kubenswrapper[4903]: I0128 17:23:06.976566 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" event={"ID":"28bcef49-09f5-4d52-b6d5-022be9688809","Type":"ContainerStarted","Data":"fa936205419dedd9b88c2c4b03211693777fa879ae55640747ccccde51caf489"} Jan 28 17:23:06 crc kubenswrapper[4903]: I0128 17:23:06.981204 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-97wf9" event={"ID":"b05587f5-cb99-43be-9bdf-4c763735c0da","Type":"ContainerStarted","Data":"01378104c04ba7f6972c481a3b5933c8b63ab01fa4b77fc64ff9e5bd9a1b3cd8"} Jan 28 17:23:06 crc kubenswrapper[4903]: I0128 17:23:06.981496 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-97wf9" event={"ID":"b05587f5-cb99-43be-9bdf-4c763735c0da","Type":"ContainerStarted","Data":"c37d5f5ab673513b4600272837778a57b616633343a9e4dd1eff98dd53081f31"} Jan 28 17:23:07 crc kubenswrapper[4903]: I0128 17:23:07.036971 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-97wf9" podStartSLOduration=2.03695109 podStartE2EDuration="2.03695109s" podCreationTimestamp="2026-01-28 17:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:23:07.029087936 +0000 UTC m=+5859.305059457" watchObservedRunningTime="2026-01-28 17:23:07.03695109 +0000 UTC m=+5859.312922601" Jan 28 17:23:07 crc kubenswrapper[4903]: I0128 17:23:07.993447 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" event={"ID":"28bcef49-09f5-4d52-b6d5-022be9688809","Type":"ContainerStarted","Data":"25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1"} Jan 28 17:23:07 crc kubenswrapper[4903]: I0128 17:23:07.993929 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:07 crc kubenswrapper[4903]: I0128 17:23:07.995565 4903 generic.go:334] "Generic (PLEG): container finished" podID="b05587f5-cb99-43be-9bdf-4c763735c0da" containerID="01378104c04ba7f6972c481a3b5933c8b63ab01fa4b77fc64ff9e5bd9a1b3cd8" exitCode=0 Jan 28 17:23:07 crc kubenswrapper[4903]: I0128 17:23:07.995597 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-97wf9" event={"ID":"b05587f5-cb99-43be-9bdf-4c763735c0da","Type":"ContainerDied","Data":"01378104c04ba7f6972c481a3b5933c8b63ab01fa4b77fc64ff9e5bd9a1b3cd8"} Jan 28 17:23:08 crc kubenswrapper[4903]: I0128 
17:23:08.009937 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" podStartSLOduration=3.009920172 podStartE2EDuration="3.009920172s" podCreationTimestamp="2026-01-28 17:23:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:23:08.008707039 +0000 UTC m=+5860.284678540" watchObservedRunningTime="2026-01-28 17:23:08.009920172 +0000 UTC m=+5860.285891683" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.407283 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.590389 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l47bg\" (UniqueName: \"kubernetes.io/projected/b05587f5-cb99-43be-9bdf-4c763735c0da-kube-api-access-l47bg\") pod \"b05587f5-cb99-43be-9bdf-4c763735c0da\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.590494 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-scripts\") pod \"b05587f5-cb99-43be-9bdf-4c763735c0da\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.590560 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-combined-ca-bundle\") pod \"b05587f5-cb99-43be-9bdf-4c763735c0da\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.590613 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-config-data\") pod \"b05587f5-cb99-43be-9bdf-4c763735c0da\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.590665 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05587f5-cb99-43be-9bdf-4c763735c0da-logs\") pod \"b05587f5-cb99-43be-9bdf-4c763735c0da\" (UID: \"b05587f5-cb99-43be-9bdf-4c763735c0da\") " Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.591161 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05587f5-cb99-43be-9bdf-4c763735c0da-logs" (OuterVolumeSpecName: "logs") pod "b05587f5-cb99-43be-9bdf-4c763735c0da" (UID: "b05587f5-cb99-43be-9bdf-4c763735c0da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.595885 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-scripts" (OuterVolumeSpecName: "scripts") pod "b05587f5-cb99-43be-9bdf-4c763735c0da" (UID: "b05587f5-cb99-43be-9bdf-4c763735c0da"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.599428 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05587f5-cb99-43be-9bdf-4c763735c0da-kube-api-access-l47bg" (OuterVolumeSpecName: "kube-api-access-l47bg") pod "b05587f5-cb99-43be-9bdf-4c763735c0da" (UID: "b05587f5-cb99-43be-9bdf-4c763735c0da"). InnerVolumeSpecName "kube-api-access-l47bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.615236 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b05587f5-cb99-43be-9bdf-4c763735c0da" (UID: "b05587f5-cb99-43be-9bdf-4c763735c0da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.616260 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-config-data" (OuterVolumeSpecName: "config-data") pod "b05587f5-cb99-43be-9bdf-4c763735c0da" (UID: "b05587f5-cb99-43be-9bdf-4c763735c0da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.692305 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l47bg\" (UniqueName: \"kubernetes.io/projected/b05587f5-cb99-43be-9bdf-4c763735c0da-kube-api-access-l47bg\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.692461 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.692565 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.692627 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05587f5-cb99-43be-9bdf-4c763735c0da-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:09 crc kubenswrapper[4903]: I0128 17:23:09.692685 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05587f5-cb99-43be-9bdf-4c763735c0da-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.014118 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-97wf9" event={"ID":"b05587f5-cb99-43be-9bdf-4c763735c0da","Type":"ContainerDied","Data":"c37d5f5ab673513b4600272837778a57b616633343a9e4dd1eff98dd53081f31"} Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.014340 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c37d5f5ab673513b4600272837778a57b616633343a9e4dd1eff98dd53081f31" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.014347 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-97wf9" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.516651 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c88447cc4-djzzw"] Jan 28 17:23:10 crc kubenswrapper[4903]: E0128 17:23:10.517314 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05587f5-cb99-43be-9bdf-4c763735c0da" containerName="placement-db-sync" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.517327 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05587f5-cb99-43be-9bdf-4c763735c0da" containerName="placement-db-sync" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.517482 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05587f5-cb99-43be-9bdf-4c763735c0da" containerName="placement-db-sync" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.518357 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.520088 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.520284 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.522158 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.522294 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hrws7" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.522827 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.556205 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c88447cc4-djzzw"] Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.609621 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-combined-ca-bundle\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.609734 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-logs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.609759 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-public-tls-certs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.609852 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-internal-tls-certs\") pod \"placement-6c88447cc4-djzzw\" (UID: 
\"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.609989 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jcs4\" (UniqueName: \"kubernetes.io/projected/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-kube-api-access-9jcs4\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.610022 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-config-data\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.610081 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-scripts\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.711922 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-logs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.711971 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-public-tls-certs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.712001 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-internal-tls-certs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.712050 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jcs4\" (UniqueName: \"kubernetes.io/projected/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-kube-api-access-9jcs4\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.712068 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-config-data\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.712146 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-scripts\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " 
pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.712229 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-combined-ca-bundle\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.713118 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-logs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.716618 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-public-tls-certs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.716636 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-config-data\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.717573 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-combined-ca-bundle\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.717717 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-scripts\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.725161 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-internal-tls-certs\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.728302 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jcs4\" (UniqueName: \"kubernetes.io/projected/ec3b34a5-d03e-49fd-94ce-d9984704d2ab-kube-api-access-9jcs4\") pod \"placement-6c88447cc4-djzzw\" (UID: \"ec3b34a5-d03e-49fd-94ce-d9984704d2ab\") " pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:10 crc kubenswrapper[4903]: I0128 17:23:10.835134 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:11 crc kubenswrapper[4903]: W0128 17:23:11.275457 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec3b34a5_d03e_49fd_94ce_d9984704d2ab.slice/crio-9a2e594311a4f4fe974c62c154614815370c44f59c1f2511227a0156c7556878 WatchSource:0}: Error finding container 9a2e594311a4f4fe974c62c154614815370c44f59c1f2511227a0156c7556878: Status 404 returned error can't find the container with id 9a2e594311a4f4fe974c62c154614815370c44f59c1f2511227a0156c7556878 Jan 28 17:23:11 crc kubenswrapper[4903]: I0128 17:23:11.276804 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c88447cc4-djzzw"] Jan 28 17:23:12 crc kubenswrapper[4903]: I0128 17:23:12.038747 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c88447cc4-djzzw" event={"ID":"ec3b34a5-d03e-49fd-94ce-d9984704d2ab","Type":"ContainerStarted","Data":"e4855aa3c015d4ebf6c4006cd18847491032fa2f76ecf8baa7faa241805ff4ab"} Jan 28 17:23:12 crc kubenswrapper[4903]: I0128 17:23:12.039070 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c88447cc4-djzzw" event={"ID":"ec3b34a5-d03e-49fd-94ce-d9984704d2ab","Type":"ContainerStarted","Data":"e31282cdd4253408826edad8555f6c200f3bb9aa424278ec64437bc01cbdd217"} Jan 28 17:23:12 crc kubenswrapper[4903]: I0128 17:23:12.039081 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c88447cc4-djzzw" event={"ID":"ec3b34a5-d03e-49fd-94ce-d9984704d2ab","Type":"ContainerStarted","Data":"9a2e594311a4f4fe974c62c154614815370c44f59c1f2511227a0156c7556878"} Jan 28 17:23:12 crc kubenswrapper[4903]: I0128 17:23:12.039912 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:12 crc kubenswrapper[4903]: I0128 17:23:12.039931 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:12 crc kubenswrapper[4903]: I0128 17:23:12.057011 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c88447cc4-djzzw" podStartSLOduration=2.05698657 podStartE2EDuration="2.05698657s" podCreationTimestamp="2026-01-28 17:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:23:12.055454999 +0000 UTC m=+5864.331426510" watchObservedRunningTime="2026-01-28 17:23:12.05698657 +0000 UTC m=+5864.332958081" Jan 28 17:23:12 crc kubenswrapper[4903]: I0128 17:23:12.414049 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:23:12 crc kubenswrapper[4903]: E0128 17:23:12.414757 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:23:15 crc kubenswrapper[4903]: I0128 17:23:15.627635 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:23:15 crc kubenswrapper[4903]: I0128 17:23:15.692774 4903 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-665b6fb647-htl8h"] Jan 28 17:23:15 crc kubenswrapper[4903]: I0128 17:23:15.693008 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" podUID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerName="dnsmasq-dns" containerID="cri-o://81be15fdd8343214a566a7022156eaf6e27036b3320ea1267e253277beb74449" gracePeriod=10 Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.083610 4903 generic.go:334] "Generic (PLEG): container finished" podID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerID="81be15fdd8343214a566a7022156eaf6e27036b3320ea1267e253277beb74449" exitCode=0 Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.083718 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" event={"ID":"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e","Type":"ContainerDied","Data":"81be15fdd8343214a566a7022156eaf6e27036b3320ea1267e253277beb74449"} Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.083940 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" event={"ID":"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e","Type":"ContainerDied","Data":"6f9d0e3507111bd073123ae4e1f81ce5961574d4d98be4be0b007aa949cad3c8"} Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.083954 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9d0e3507111bd073123ae4e1f81ce5961574d4d98be4be0b007aa949cad3c8" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.133056 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.324740 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-config\") pod \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.324988 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-dns-svc\") pod \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.325054 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-nb\") pod \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.325091 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2glf4\" (UniqueName: \"kubernetes.io/projected/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-kube-api-access-2glf4\") pod \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.325719 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-sb\") pod \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\" (UID: \"5bfb9181-8148-4a8b-ae4b-7465b68b3c9e\") " Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 
17:23:16.330308 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-kube-api-access-2glf4" (OuterVolumeSpecName: "kube-api-access-2glf4") pod "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" (UID: "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e"). InnerVolumeSpecName "kube-api-access-2glf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.374662 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" (UID: "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.374861 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-config" (OuterVolumeSpecName: "config") pod "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" (UID: "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.376828 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" (UID: "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.395799 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" (UID: "5bfb9181-8148-4a8b-ae4b-7465b68b3c9e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.427873 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.427904 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2glf4\" (UniqueName: \"kubernetes.io/projected/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-kube-api-access-2glf4\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.427916 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.427925 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:16 crc kubenswrapper[4903]: I0128 17:23:16.427934 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:23:17 crc kubenswrapper[4903]: I0128 17:23:17.096581 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-665b6fb647-htl8h" Jan 28 17:23:17 crc kubenswrapper[4903]: I0128 17:23:17.121973 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-665b6fb647-htl8h"] Jan 28 17:23:17 crc kubenswrapper[4903]: I0128 17:23:17.144575 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-665b6fb647-htl8h"] Jan 28 17:23:18 crc kubenswrapper[4903]: I0128 17:23:18.424628 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" path="/var/lib/kubelet/pods/5bfb9181-8148-4a8b-ae4b-7465b68b3c9e/volumes" Jan 28 17:23:26 crc kubenswrapper[4903]: I0128 17:23:26.413375 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:23:26 crc kubenswrapper[4903]: E0128 17:23:26.414486 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:23:38 crc kubenswrapper[4903]: I0128 17:23:38.420172 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:23:38 crc kubenswrapper[4903]: E0128 17:23:38.421072 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:23:41 crc kubenswrapper[4903]: I0128 17:23:41.869803 
4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:41 crc kubenswrapper[4903]: I0128 17:23:41.873119 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c88447cc4-djzzw" Jan 28 17:23:49 crc kubenswrapper[4903]: I0128 17:23:49.413511 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:23:49 crc kubenswrapper[4903]: E0128 17:23:49.414293 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:24:04 crc kubenswrapper[4903]: I0128 17:24:04.414751 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:24:04 crc kubenswrapper[4903]: E0128 17:24:04.415728 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.444259 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kvc96"] Jan 28 17:24:05 crc kubenswrapper[4903]: E0128 17:24:05.444938 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerName="dnsmasq-dns" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.444954 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerName="dnsmasq-dns" Jan 28 17:24:05 crc kubenswrapper[4903]: E0128 17:24:05.444981 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerName="init" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.444989 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerName="init" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.445198 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfb9181-8148-4a8b-ae4b-7465b68b3c9e" containerName="dnsmasq-dns" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.445924 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.461276 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kvc96"] Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.547827 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kcm7v"] Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.549159 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.550363 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d122611d-0720-468d-8841-174e00f898fe-operator-scripts\") pod \"nova-api-db-create-kvc96\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.550433 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcxht\" (UniqueName: \"kubernetes.io/projected/d122611d-0720-468d-8841-174e00f898fe-kube-api-access-fcxht\") pod \"nova-api-db-create-kvc96\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.558292 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kcm7v"] Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.652091 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6mj\" (UniqueName: \"kubernetes.io/projected/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-kube-api-access-ft6mj\") pod \"nova-cell0-db-create-kcm7v\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.652173 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d122611d-0720-468d-8841-174e00f898fe-operator-scripts\") pod \"nova-api-db-create-kvc96\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.652319 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-operator-scripts\") pod \"nova-cell0-db-create-kcm7v\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.652406 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcxht\" (UniqueName: \"kubernetes.io/projected/d122611d-0720-468d-8841-174e00f898fe-kube-api-access-fcxht\") pod \"nova-api-db-create-kvc96\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.652967 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d122611d-0720-468d-8841-174e00f898fe-operator-scripts\") pod \"nova-api-db-create-kvc96\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.655258 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b894-account-create-update-dkrqc"] Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.656551 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.664184 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b894-account-create-update-dkrqc"] Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.664991 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.678053 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcxht\" (UniqueName: \"kubernetes.io/projected/d122611d-0720-468d-8841-174e00f898fe-kube-api-access-fcxht\") pod \"nova-api-db-create-kvc96\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.751711 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-47f4p"] Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.752955 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.754030 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6mj\" (UniqueName: \"kubernetes.io/projected/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-kube-api-access-ft6mj\") pod \"nova-cell0-db-create-kcm7v\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.754102 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258fac9e-ef70-4e82-8767-1858cf6272b6-operator-scripts\") pod \"nova-api-b894-account-create-update-dkrqc\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.754154 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-operator-scripts\") pod \"nova-cell0-db-create-kcm7v\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.754227 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2k2z\" (UniqueName: \"kubernetes.io/projected/258fac9e-ef70-4e82-8767-1858cf6272b6-kube-api-access-k2k2z\") pod \"nova-api-b894-account-create-update-dkrqc\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.754863 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-operator-scripts\") pod \"nova-cell0-db-create-kcm7v\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.763683 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-47f4p"] Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.769355 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:05 crc kubenswrapper[4903]: I0128 17:24:05.787574 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6mj\" (UniqueName: \"kubernetes.io/projected/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-kube-api-access-ft6mj\") pod \"nova-cell0-db-create-kcm7v\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.857682 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46797207-aaf7-442a-a249-caa3998a37cb-operator-scripts\") pod \"nova-cell1-db-create-47f4p\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.857838 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258fac9e-ef70-4e82-8767-1858cf6272b6-operator-scripts\") pod \"nova-api-b894-account-create-update-dkrqc\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.857930 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2k2z\" (UniqueName: \"kubernetes.io/projected/258fac9e-ef70-4e82-8767-1858cf6272b6-kube-api-access-k2k2z\") pod \"nova-api-b894-account-create-update-dkrqc\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.858023 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbrc\" (UniqueName: \"kubernetes.io/projected/46797207-aaf7-442a-a249-caa3998a37cb-kube-api-access-dzbrc\") pod \"nova-cell1-db-create-47f4p\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.859057 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258fac9e-ef70-4e82-8767-1858cf6272b6-operator-scripts\") pod \"nova-api-b894-account-create-update-dkrqc\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.868920 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.872645 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8855-account-create-update-jv4rw"] Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.874009 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.877140 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.883673 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8855-account-create-update-jv4rw"] Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.884057 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2k2z\" (UniqueName: \"kubernetes.io/projected/258fac9e-ef70-4e82-8767-1858cf6272b6-kube-api-access-k2k2z\") pod \"nova-api-b894-account-create-update-dkrqc\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.959887 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzbrc\" (UniqueName: \"kubernetes.io/projected/46797207-aaf7-442a-a249-caa3998a37cb-kube-api-access-dzbrc\") pod \"nova-cell1-db-create-47f4p\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.959971 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46797207-aaf7-442a-a249-caa3998a37cb-operator-scripts\") pod \"nova-cell1-db-create-47f4p\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.960059 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4f78\" (UniqueName: \"kubernetes.io/projected/988c20f0-d6bf-4819-b2ba-4323f7a428af-kube-api-access-r4f78\") pod \"nova-cell0-8855-account-create-update-jv4rw\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.960098 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988c20f0-d6bf-4819-b2ba-4323f7a428af-operator-scripts\") pod \"nova-cell0-8855-account-create-update-jv4rw\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.961436 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46797207-aaf7-442a-a249-caa3998a37cb-operator-scripts\") pod \"nova-cell1-db-create-47f4p\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.977628 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:05.979091 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzbrc\" (UniqueName: \"kubernetes.io/projected/46797207-aaf7-442a-a249-caa3998a37cb-kube-api-access-dzbrc\") pod \"nova-cell1-db-create-47f4p\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.061818 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988c20f0-d6bf-4819-b2ba-4323f7a428af-operator-scripts\") pod \"nova-cell0-8855-account-create-update-jv4rw\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.062352 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4f78\" (UniqueName: \"kubernetes.io/projected/988c20f0-d6bf-4819-b2ba-4323f7a428af-kube-api-access-r4f78\") pod \"nova-cell0-8855-account-create-update-jv4rw\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.062779 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988c20f0-d6bf-4819-b2ba-4323f7a428af-operator-scripts\") pod \"nova-cell0-8855-account-create-update-jv4rw\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.066105 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-02f9-account-create-update-g5jhl"] Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.067394 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.072066 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.072472 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.074752 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-02f9-account-create-update-g5jhl"] Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.079010 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4f78\" (UniqueName: \"kubernetes.io/projected/988c20f0-d6bf-4819-b2ba-4323f7a428af-kube-api-access-r4f78\") pod \"nova-cell0-8855-account-create-update-jv4rw\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.164051 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10367b1a-b989-4e5b-b159-de422134c172-operator-scripts\") pod \"nova-cell1-02f9-account-create-update-g5jhl\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.164118 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbqq\" (UniqueName: \"kubernetes.io/projected/10367b1a-b989-4e5b-b159-de422134c172-kube-api-access-8qbqq\") pod \"nova-cell1-02f9-account-create-update-g5jhl\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.266062 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10367b1a-b989-4e5b-b159-de422134c172-operator-scripts\") pod \"nova-cell1-02f9-account-create-update-g5jhl\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.266139 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qbqq\" (UniqueName: \"kubernetes.io/projected/10367b1a-b989-4e5b-b159-de422134c172-kube-api-access-8qbqq\") pod \"nova-cell1-02f9-account-create-update-g5jhl\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.266785 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10367b1a-b989-4e5b-b159-de422134c172-operator-scripts\") pod \"nova-cell1-02f9-account-create-update-g5jhl\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.274430 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.283934 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbqq\" (UniqueName: \"kubernetes.io/projected/10367b1a-b989-4e5b-b159-de422134c172-kube-api-access-8qbqq\") pod \"nova-cell1-02f9-account-create-update-g5jhl\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:06 crc kubenswrapper[4903]: I0128 17:24:06.389939 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.104245 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kvc96"] Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.114069 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-47f4p"] Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.137041 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8855-account-create-update-jv4rw"] Jan 28 17:24:07 crc kubenswrapper[4903]: W0128 17:24:07.141626 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod988c20f0_d6bf_4819_b2ba_4323f7a428af.slice/crio-b6294d7cc7158afdbe2980c439c8ceabc8217affebd3c74878606f97a531ba63 WatchSource:0}: Error finding container b6294d7cc7158afdbe2980c439c8ceabc8217affebd3c74878606f97a531ba63: Status 404 returned error can't find the container with id b6294d7cc7158afdbe2980c439c8ceabc8217affebd3c74878606f97a531ba63 Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.156318 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b894-account-create-update-dkrqc"] Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.170734 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kcm7v"] Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.182333 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-02f9-account-create-update-g5jhl"] Jan 28 17:24:07 crc kubenswrapper[4903]: W0128 17:24:07.205046 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10367b1a_b989_4e5b_b159_de422134c172.slice/crio-fd615e07d9e63597e54e510d439667011b16bafbb750489243e84bfad7a5a202 WatchSource:0}: Error finding container fd615e07d9e63597e54e510d439667011b16bafbb750489243e84bfad7a5a202: Status 404 returned error can't find the container with id fd615e07d9e63597e54e510d439667011b16bafbb750489243e84bfad7a5a202 Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.537048 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b894-account-create-update-dkrqc" event={"ID":"258fac9e-ef70-4e82-8767-1858cf6272b6","Type":"ContainerStarted","Data":"f5fca149b88cad5efb4fb921475d216877821372f8b8d6d8b242dda7665676dd"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.537360 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b894-account-create-update-dkrqc" event={"ID":"258fac9e-ef70-4e82-8767-1858cf6272b6","Type":"ContainerStarted","Data":"b725365c012c57ed1bc6ded1c2f6a0a52896471e7890ce3250da8585bf3254ed"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.541111 4903 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" event={"ID":"10367b1a-b989-4e5b-b159-de422134c172","Type":"ContainerStarted","Data":"471e950b1c26f4c4c2ab6ec600d3214871c349092283656ef6d270179925205e"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.541171 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" event={"ID":"10367b1a-b989-4e5b-b159-de422134c172","Type":"ContainerStarted","Data":"fd615e07d9e63597e54e510d439667011b16bafbb750489243e84bfad7a5a202"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.543844 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" event={"ID":"988c20f0-d6bf-4819-b2ba-4323f7a428af","Type":"ContainerStarted","Data":"2e28ab3bff497c7adc168195dbe8594d9a9eb099bddfe5c07fd213901abee703"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.543891 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" event={"ID":"988c20f0-d6bf-4819-b2ba-4323f7a428af","Type":"ContainerStarted","Data":"b6294d7cc7158afdbe2980c439c8ceabc8217affebd3c74878606f97a531ba63"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.550116 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-47f4p" event={"ID":"46797207-aaf7-442a-a249-caa3998a37cb","Type":"ContainerStarted","Data":"2c3e434e6d47b47048281d066137238b3a673f0468580e514847931a66c0a462"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.550177 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-47f4p" event={"ID":"46797207-aaf7-442a-a249-caa3998a37cb","Type":"ContainerStarted","Data":"15cb832f40c404be15f29df01c98c8ad355347dba3747e462c1e339d528cff6c"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.553854 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kcm7v" event={"ID":"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b","Type":"ContainerStarted","Data":"667caeb9cc9d975db0a662fabb5aa85b793c84a917c028cd94b04ab9b63a8b28"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.553914 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kcm7v" event={"ID":"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b","Type":"ContainerStarted","Data":"c4086888716c57970d0891e5ba32954fd0341763ec9eb229a3656366dfab3cd9"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.561437 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b894-account-create-update-dkrqc" podStartSLOduration=2.560508488 podStartE2EDuration="2.560508488s" podCreationTimestamp="2026-01-28 17:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:07.558265798 +0000 UTC m=+5919.834237309" watchObservedRunningTime="2026-01-28 17:24:07.560508488 +0000 UTC m=+5919.836479999" Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.567395 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kvc96" event={"ID":"d122611d-0720-468d-8841-174e00f898fe","Type":"ContainerStarted","Data":"b1e4c7a0cd5eb3039528dc7bbab148af038399c68a6a7d965f317b1a9e4e7a9b"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.567483 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kvc96" 
event={"ID":"d122611d-0720-468d-8841-174e00f898fe","Type":"ContainerStarted","Data":"83f7a9039f3bfb7446dfc41ca6b4c75696117aea5ed01d9dc6332473a1172cf0"} Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.581951 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-47f4p" podStartSLOduration=2.581915819 podStartE2EDuration="2.581915819s" podCreationTimestamp="2026-01-28 17:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:07.577915211 +0000 UTC m=+5919.853886732" watchObservedRunningTime="2026-01-28 17:24:07.581915819 +0000 UTC m=+5919.857887330" Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.611645 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-kcm7v" podStartSLOduration=2.611602085 podStartE2EDuration="2.611602085s" podCreationTimestamp="2026-01-28 17:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:07.599696161 +0000 UTC m=+5919.875667682" watchObservedRunningTime="2026-01-28 17:24:07.611602085 +0000 UTC m=+5919.887573616" Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.624520 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" podStartSLOduration=1.624497614 podStartE2EDuration="1.624497614s" podCreationTimestamp="2026-01-28 17:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:07.616077656 +0000 UTC m=+5919.892049167" watchObservedRunningTime="2026-01-28 17:24:07.624497614 +0000 UTC m=+5919.900469125" Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.637588 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" podStartSLOduration=2.637564718 podStartE2EDuration="2.637564718s" podCreationTimestamp="2026-01-28 17:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:07.632605424 +0000 UTC m=+5919.908576955" watchObservedRunningTime="2026-01-28 17:24:07.637564718 +0000 UTC m=+5919.913536229" Jan 28 17:24:07 crc kubenswrapper[4903]: I0128 17:24:07.658213 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-kvc96" podStartSLOduration=2.658196508 podStartE2EDuration="2.658196508s" podCreationTimestamp="2026-01-28 17:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:07.651145357 +0000 UTC m=+5919.927116878" watchObservedRunningTime="2026-01-28 17:24:07.658196508 +0000 UTC m=+5919.934168009" Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.578253 4903 generic.go:334] "Generic (PLEG): container finished" podID="258fac9e-ef70-4e82-8767-1858cf6272b6" containerID="f5fca149b88cad5efb4fb921475d216877821372f8b8d6d8b242dda7665676dd" exitCode=0 Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.578332 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b894-account-create-update-dkrqc" 
event={"ID":"258fac9e-ef70-4e82-8767-1858cf6272b6","Type":"ContainerDied","Data":"f5fca149b88cad5efb4fb921475d216877821372f8b8d6d8b242dda7665676dd"} Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.581443 4903 generic.go:334] "Generic (PLEG): container finished" podID="10367b1a-b989-4e5b-b159-de422134c172" containerID="471e950b1c26f4c4c2ab6ec600d3214871c349092283656ef6d270179925205e" exitCode=0 Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.581513 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" event={"ID":"10367b1a-b989-4e5b-b159-de422134c172","Type":"ContainerDied","Data":"471e950b1c26f4c4c2ab6ec600d3214871c349092283656ef6d270179925205e"} Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.583301 4903 generic.go:334] "Generic (PLEG): container finished" podID="988c20f0-d6bf-4819-b2ba-4323f7a428af" containerID="2e28ab3bff497c7adc168195dbe8594d9a9eb099bddfe5c07fd213901abee703" exitCode=0 Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.583351 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" event={"ID":"988c20f0-d6bf-4819-b2ba-4323f7a428af","Type":"ContainerDied","Data":"2e28ab3bff497c7adc168195dbe8594d9a9eb099bddfe5c07fd213901abee703"} Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.584935 4903 generic.go:334] "Generic (PLEG): container finished" podID="46797207-aaf7-442a-a249-caa3998a37cb" containerID="2c3e434e6d47b47048281d066137238b3a673f0468580e514847931a66c0a462" exitCode=0 Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.585028 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-47f4p" event={"ID":"46797207-aaf7-442a-a249-caa3998a37cb","Type":"ContainerDied","Data":"2c3e434e6d47b47048281d066137238b3a673f0468580e514847931a66c0a462"} Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.587445 4903 generic.go:334] "Generic (PLEG): container finished" podID="7ac2962a-7b79-419a-a524-5d2b6b3d3a8b" containerID="667caeb9cc9d975db0a662fabb5aa85b793c84a917c028cd94b04ab9b63a8b28" exitCode=0 Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.587572 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kcm7v" event={"ID":"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b","Type":"ContainerDied","Data":"667caeb9cc9d975db0a662fabb5aa85b793c84a917c028cd94b04ab9b63a8b28"} Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.589931 4903 generic.go:334] "Generic (PLEG): container finished" podID="d122611d-0720-468d-8841-174e00f898fe" containerID="b1e4c7a0cd5eb3039528dc7bbab148af038399c68a6a7d965f317b1a9e4e7a9b" exitCode=0 Jan 28 17:24:08 crc kubenswrapper[4903]: I0128 17:24:08.589994 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kvc96" event={"ID":"d122611d-0720-468d-8841-174e00f898fe","Type":"ContainerDied","Data":"b1e4c7a0cd5eb3039528dc7bbab148af038399c68a6a7d965f317b1a9e4e7a9b"} Jan 28 17:24:09 crc kubenswrapper[4903]: I0128 17:24:09.972286 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.043341 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcxht\" (UniqueName: \"kubernetes.io/projected/d122611d-0720-468d-8841-174e00f898fe-kube-api-access-fcxht\") pod \"d122611d-0720-468d-8841-174e00f898fe\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.043879 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d122611d-0720-468d-8841-174e00f898fe-operator-scripts\") pod \"d122611d-0720-468d-8841-174e00f898fe\" (UID: \"d122611d-0720-468d-8841-174e00f898fe\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.045187 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d122611d-0720-468d-8841-174e00f898fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d122611d-0720-468d-8841-174e00f898fe" (UID: "d122611d-0720-468d-8841-174e00f898fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.050582 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d122611d-0720-468d-8841-174e00f898fe-kube-api-access-fcxht" (OuterVolumeSpecName: "kube-api-access-fcxht") pod "d122611d-0720-468d-8841-174e00f898fe" (UID: "d122611d-0720-468d-8841-174e00f898fe"). InnerVolumeSpecName "kube-api-access-fcxht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.147523 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcxht\" (UniqueName: \"kubernetes.io/projected/d122611d-0720-468d-8841-174e00f898fe-kube-api-access-fcxht\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.147616 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d122611d-0720-468d-8841-174e00f898fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.151653 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.160169 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.176236 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.192358 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.202782 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249095 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qbqq\" (UniqueName: \"kubernetes.io/projected/10367b1a-b989-4e5b-b159-de422134c172-kube-api-access-8qbqq\") pod \"10367b1a-b989-4e5b-b159-de422134c172\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249208 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2k2z\" (UniqueName: \"kubernetes.io/projected/258fac9e-ef70-4e82-8767-1858cf6272b6-kube-api-access-k2k2z\") pod \"258fac9e-ef70-4e82-8767-1858cf6272b6\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249233 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzbrc\" (UniqueName: \"kubernetes.io/projected/46797207-aaf7-442a-a249-caa3998a37cb-kube-api-access-dzbrc\") pod \"46797207-aaf7-442a-a249-caa3998a37cb\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249341 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft6mj\" (UniqueName: \"kubernetes.io/projected/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-kube-api-access-ft6mj\") pod \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249376 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10367b1a-b989-4e5b-b159-de422134c172-operator-scripts\") pod \"10367b1a-b989-4e5b-b159-de422134c172\" (UID: \"10367b1a-b989-4e5b-b159-de422134c172\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249402 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988c20f0-d6bf-4819-b2ba-4323f7a428af-operator-scripts\") pod \"988c20f0-d6bf-4819-b2ba-4323f7a428af\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249451 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258fac9e-ef70-4e82-8767-1858cf6272b6-operator-scripts\") pod \"258fac9e-ef70-4e82-8767-1858cf6272b6\" (UID: \"258fac9e-ef70-4e82-8767-1858cf6272b6\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249491 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46797207-aaf7-442a-a249-caa3998a37cb-operator-scripts\") pod \"46797207-aaf7-442a-a249-caa3998a37cb\" (UID: \"46797207-aaf7-442a-a249-caa3998a37cb\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249569 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-operator-scripts\") pod \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\" (UID: \"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.249621 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4f78\" (UniqueName: 
\"kubernetes.io/projected/988c20f0-d6bf-4819-b2ba-4323f7a428af-kube-api-access-r4f78\") pod \"988c20f0-d6bf-4819-b2ba-4323f7a428af\" (UID: \"988c20f0-d6bf-4819-b2ba-4323f7a428af\") " Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.250965 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10367b1a-b989-4e5b-b159-de422134c172-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10367b1a-b989-4e5b-b159-de422134c172" (UID: "10367b1a-b989-4e5b-b159-de422134c172"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.251043 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46797207-aaf7-442a-a249-caa3998a37cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46797207-aaf7-442a-a249-caa3998a37cb" (UID: "46797207-aaf7-442a-a249-caa3998a37cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.251321 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ac2962a-7b79-419a-a524-5d2b6b3d3a8b" (UID: "7ac2962a-7b79-419a-a524-5d2b6b3d3a8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.251576 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/258fac9e-ef70-4e82-8767-1858cf6272b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "258fac9e-ef70-4e82-8767-1858cf6272b6" (UID: "258fac9e-ef70-4e82-8767-1858cf6272b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.251628 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988c20f0-d6bf-4819-b2ba-4323f7a428af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "988c20f0-d6bf-4819-b2ba-4323f7a428af" (UID: "988c20f0-d6bf-4819-b2ba-4323f7a428af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.253007 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988c20f0-d6bf-4819-b2ba-4323f7a428af-kube-api-access-r4f78" (OuterVolumeSpecName: "kube-api-access-r4f78") pod "988c20f0-d6bf-4819-b2ba-4323f7a428af" (UID: "988c20f0-d6bf-4819-b2ba-4323f7a428af"). InnerVolumeSpecName "kube-api-access-r4f78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.258002 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46797207-aaf7-442a-a249-caa3998a37cb-kube-api-access-dzbrc" (OuterVolumeSpecName: "kube-api-access-dzbrc") pod "46797207-aaf7-442a-a249-caa3998a37cb" (UID: "46797207-aaf7-442a-a249-caa3998a37cb"). InnerVolumeSpecName "kube-api-access-dzbrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.258054 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-kube-api-access-ft6mj" (OuterVolumeSpecName: "kube-api-access-ft6mj") pod "7ac2962a-7b79-419a-a524-5d2b6b3d3a8b" (UID: "7ac2962a-7b79-419a-a524-5d2b6b3d3a8b"). InnerVolumeSpecName "kube-api-access-ft6mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.258089 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10367b1a-b989-4e5b-b159-de422134c172-kube-api-access-8qbqq" (OuterVolumeSpecName: "kube-api-access-8qbqq") pod "10367b1a-b989-4e5b-b159-de422134c172" (UID: "10367b1a-b989-4e5b-b159-de422134c172"). InnerVolumeSpecName "kube-api-access-8qbqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.258147 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258fac9e-ef70-4e82-8767-1858cf6272b6-kube-api-access-k2k2z" (OuterVolumeSpecName: "kube-api-access-k2k2z") pod "258fac9e-ef70-4e82-8767-1858cf6272b6" (UID: "258fac9e-ef70-4e82-8767-1858cf6272b6"). InnerVolumeSpecName "kube-api-access-k2k2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.351914 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft6mj\" (UniqueName: \"kubernetes.io/projected/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-kube-api-access-ft6mj\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.351977 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10367b1a-b989-4e5b-b159-de422134c172-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.351987 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988c20f0-d6bf-4819-b2ba-4323f7a428af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.351996 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258fac9e-ef70-4e82-8767-1858cf6272b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.352005 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46797207-aaf7-442a-a249-caa3998a37cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.352016 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.352026 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4f78\" (UniqueName: \"kubernetes.io/projected/988c20f0-d6bf-4819-b2ba-4323f7a428af-kube-api-access-r4f78\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.352060 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qbqq\" (UniqueName: 
\"kubernetes.io/projected/10367b1a-b989-4e5b-b159-de422134c172-kube-api-access-8qbqq\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.352069 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2k2z\" (UniqueName: \"kubernetes.io/projected/258fac9e-ef70-4e82-8767-1858cf6272b6-kube-api-access-k2k2z\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.352079 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzbrc\" (UniqueName: \"kubernetes.io/projected/46797207-aaf7-442a-a249-caa3998a37cb-kube-api-access-dzbrc\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.606985 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kcm7v" event={"ID":"7ac2962a-7b79-419a-a524-5d2b6b3d3a8b","Type":"ContainerDied","Data":"c4086888716c57970d0891e5ba32954fd0341763ec9eb229a3656366dfab3cd9"} Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.607040 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4086888716c57970d0891e5ba32954fd0341763ec9eb229a3656366dfab3cd9" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.607006 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kcm7v" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.608477 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kvc96" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.608475 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kvc96" event={"ID":"d122611d-0720-468d-8841-174e00f898fe","Type":"ContainerDied","Data":"83f7a9039f3bfb7446dfc41ca6b4c75696117aea5ed01d9dc6332473a1172cf0"} Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.608569 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f7a9039f3bfb7446dfc41ca6b4c75696117aea5ed01d9dc6332473a1172cf0" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.610385 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b894-account-create-update-dkrqc" event={"ID":"258fac9e-ef70-4e82-8767-1858cf6272b6","Type":"ContainerDied","Data":"b725365c012c57ed1bc6ded1c2f6a0a52896471e7890ce3250da8585bf3254ed"} Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.610418 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b725365c012c57ed1bc6ded1c2f6a0a52896471e7890ce3250da8585bf3254ed" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.610508 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b894-account-create-update-dkrqc" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.612558 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" event={"ID":"10367b1a-b989-4e5b-b159-de422134c172","Type":"ContainerDied","Data":"fd615e07d9e63597e54e510d439667011b16bafbb750489243e84bfad7a5a202"} Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.612592 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd615e07d9e63597e54e510d439667011b16bafbb750489243e84bfad7a5a202" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.612672 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-02f9-account-create-update-g5jhl" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.614640 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.615174 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8855-account-create-update-jv4rw" event={"ID":"988c20f0-d6bf-4819-b2ba-4323f7a428af","Type":"ContainerDied","Data":"b6294d7cc7158afdbe2980c439c8ceabc8217affebd3c74878606f97a531ba63"} Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.615329 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6294d7cc7158afdbe2980c439c8ceabc8217affebd3c74878606f97a531ba63" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.617185 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-47f4p" event={"ID":"46797207-aaf7-442a-a249-caa3998a37cb","Type":"ContainerDied","Data":"15cb832f40c404be15f29df01c98c8ad355347dba3747e462c1e339d528cff6c"} Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.617202 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-47f4p" Jan 28 17:24:10 crc kubenswrapper[4903]: I0128 17:24:10.617206 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15cb832f40c404be15f29df01c98c8ad355347dba3747e462c1e339d528cff6c" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.063700 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-22tmw"] Jan 28 17:24:16 crc kubenswrapper[4903]: E0128 17:24:16.064689 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac2962a-7b79-419a-a524-5d2b6b3d3a8b" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.064707 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac2962a-7b79-419a-a524-5d2b6b3d3a8b" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: E0128 17:24:16.064740 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258fac9e-ef70-4e82-8767-1858cf6272b6" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.064748 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="258fac9e-ef70-4e82-8767-1858cf6272b6" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: E0128 17:24:16.064759 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46797207-aaf7-442a-a249-caa3998a37cb" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.064766 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="46797207-aaf7-442a-a249-caa3998a37cb" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: E0128 17:24:16.064786 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d122611d-0720-468d-8841-174e00f898fe" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.064793 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d122611d-0720-468d-8841-174e00f898fe" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: E0128 17:24:16.064805 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10367b1a-b989-4e5b-b159-de422134c172" 
containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.064812 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="10367b1a-b989-4e5b-b159-de422134c172" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: E0128 17:24:16.064822 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988c20f0-d6bf-4819-b2ba-4323f7a428af" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.064828 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="988c20f0-d6bf-4819-b2ba-4323f7a428af" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.065014 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="10367b1a-b989-4e5b-b159-de422134c172" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.065032 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="46797207-aaf7-442a-a249-caa3998a37cb" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.065050 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac2962a-7b79-419a-a524-5d2b6b3d3a8b" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.065063 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="258fac9e-ef70-4e82-8767-1858cf6272b6" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.065079 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="988c20f0-d6bf-4819-b2ba-4323f7a428af" containerName="mariadb-account-create-update" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.065095 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d122611d-0720-468d-8841-174e00f898fe" containerName="mariadb-database-create" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.065835 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.068015 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-6zdt9" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.069094 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.071584 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.078876 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-22tmw"] Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.154175 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-config-data\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.154517 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.154634 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g4w4\" (UniqueName: \"kubernetes.io/projected/8d0f0f8b-1f17-443b-97b2-c32776d01176-kube-api-access-9g4w4\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.154793 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-scripts\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.256586 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-config-data\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.257077 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.257231 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g4w4\" (UniqueName: \"kubernetes.io/projected/8d0f0f8b-1f17-443b-97b2-c32776d01176-kube-api-access-9g4w4\") pod \"nova-cell0-conductor-db-sync-22tmw\" 
(UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.257388 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-scripts\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.262885 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-scripts\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.267798 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-config-data\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.268886 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.281456 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g4w4\" (UniqueName: \"kubernetes.io/projected/8d0f0f8b-1f17-443b-97b2-c32776d01176-kube-api-access-9g4w4\") pod \"nova-cell0-conductor-db-sync-22tmw\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.390586 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:16 crc kubenswrapper[4903]: I0128 17:24:16.842990 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-22tmw"] Jan 28 17:24:17 crc kubenswrapper[4903]: I0128 17:24:17.413990 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:24:17 crc kubenswrapper[4903]: E0128 17:24:17.414629 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:24:17 crc kubenswrapper[4903]: I0128 17:24:17.667545 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-22tmw" event={"ID":"8d0f0f8b-1f17-443b-97b2-c32776d01176","Type":"ContainerStarted","Data":"ebc0ca2d53c97ece2aea016eb29096d2faf029a539004227c48bae623ffb0725"} Jan 28 17:24:17 crc kubenswrapper[4903]: I0128 17:24:17.667591 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-22tmw" event={"ID":"8d0f0f8b-1f17-443b-97b2-c32776d01176","Type":"ContainerStarted","Data":"70a531d9bfb2f329bdef626bf476b0e826d2e6dfc0a13cf488ac45172b79cdd5"} Jan 28 17:24:17 crc kubenswrapper[4903]: I0128 17:24:17.686921 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-22tmw" podStartSLOduration=1.686904843 podStartE2EDuration="1.686904843s" podCreationTimestamp="2026-01-28 17:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:17.682868453 +0000 UTC m=+5929.958839984" watchObservedRunningTime="2026-01-28 17:24:17.686904843 +0000 UTC m=+5929.962876344" Jan 28 17:24:22 crc kubenswrapper[4903]: I0128 17:24:22.708849 4903 generic.go:334] "Generic (PLEG): container finished" podID="8d0f0f8b-1f17-443b-97b2-c32776d01176" containerID="ebc0ca2d53c97ece2aea016eb29096d2faf029a539004227c48bae623ffb0725" exitCode=0 Jan 28 17:24:22 crc kubenswrapper[4903]: I0128 17:24:22.708907 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-22tmw" event={"ID":"8d0f0f8b-1f17-443b-97b2-c32776d01176","Type":"ContainerDied","Data":"ebc0ca2d53c97ece2aea016eb29096d2faf029a539004227c48bae623ffb0725"} Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.002693 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.105713 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g4w4\" (UniqueName: \"kubernetes.io/projected/8d0f0f8b-1f17-443b-97b2-c32776d01176-kube-api-access-9g4w4\") pod \"8d0f0f8b-1f17-443b-97b2-c32776d01176\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.105772 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-combined-ca-bundle\") pod \"8d0f0f8b-1f17-443b-97b2-c32776d01176\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.105872 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-scripts\") pod \"8d0f0f8b-1f17-443b-97b2-c32776d01176\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.105955 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-config-data\") pod \"8d0f0f8b-1f17-443b-97b2-c32776d01176\" (UID: \"8d0f0f8b-1f17-443b-97b2-c32776d01176\") " Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.110858 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-scripts" (OuterVolumeSpecName: "scripts") pod "8d0f0f8b-1f17-443b-97b2-c32776d01176" (UID: "8d0f0f8b-1f17-443b-97b2-c32776d01176"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.111224 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0f0f8b-1f17-443b-97b2-c32776d01176-kube-api-access-9g4w4" (OuterVolumeSpecName: "kube-api-access-9g4w4") pod "8d0f0f8b-1f17-443b-97b2-c32776d01176" (UID: "8d0f0f8b-1f17-443b-97b2-c32776d01176"). InnerVolumeSpecName "kube-api-access-9g4w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.133650 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-config-data" (OuterVolumeSpecName: "config-data") pod "8d0f0f8b-1f17-443b-97b2-c32776d01176" (UID: "8d0f0f8b-1f17-443b-97b2-c32776d01176"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.135780 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d0f0f8b-1f17-443b-97b2-c32776d01176" (UID: "8d0f0f8b-1f17-443b-97b2-c32776d01176"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.207505 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g4w4\" (UniqueName: \"kubernetes.io/projected/8d0f0f8b-1f17-443b-97b2-c32776d01176-kube-api-access-9g4w4\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.207558 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.207570 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.207582 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d0f0f8b-1f17-443b-97b2-c32776d01176-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.724748 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-22tmw" event={"ID":"8d0f0f8b-1f17-443b-97b2-c32776d01176","Type":"ContainerDied","Data":"70a531d9bfb2f329bdef626bf476b0e826d2e6dfc0a13cf488ac45172b79cdd5"} Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.724791 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a531d9bfb2f329bdef626bf476b0e826d2e6dfc0a13cf488ac45172b79cdd5" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.724833 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-22tmw" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.799381 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 17:24:24 crc kubenswrapper[4903]: E0128 17:24:24.799730 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0f0f8b-1f17-443b-97b2-c32776d01176" containerName="nova-cell0-conductor-db-sync" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.799746 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0f0f8b-1f17-443b-97b2-c32776d01176" containerName="nova-cell0-conductor-db-sync" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.799904 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0f0f8b-1f17-443b-97b2-c32776d01176" containerName="nova-cell0-conductor-db-sync" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.800418 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.803922 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.803995 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-6zdt9" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.812843 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.818430 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnszz\" (UniqueName: \"kubernetes.io/projected/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-kube-api-access-vnszz\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.818612 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.818687 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.920638 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnszz\" (UniqueName: \"kubernetes.io/projected/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-kube-api-access-vnszz\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.920984 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.921024 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.924649 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.924915 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:24 crc kubenswrapper[4903]: I0128 17:24:24.939178 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnszz\" (UniqueName: \"kubernetes.io/projected/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-kube-api-access-vnszz\") pod \"nova-cell0-conductor-0\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:25 crc kubenswrapper[4903]: I0128 17:24:25.122372 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:25 crc kubenswrapper[4903]: I0128 17:24:25.571410 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 17:24:25 crc kubenswrapper[4903]: I0128 17:24:25.735015 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ad5e0d41-5311-4d00-b9e8-69915bf46fd9","Type":"ContainerStarted","Data":"793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535"} Jan 28 17:24:25 crc kubenswrapper[4903]: I0128 17:24:25.735066 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ad5e0d41-5311-4d00-b9e8-69915bf46fd9","Type":"ContainerStarted","Data":"4aa7b922254f89ee4d5261b6806b3b4153b9b04c3f34d730a51a2e6703fe50a6"} Jan 28 17:24:25 crc kubenswrapper[4903]: I0128 17:24:25.735365 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:25 crc kubenswrapper[4903]: I0128 17:24:25.762555 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.7625154379999999 podStartE2EDuration="1.762515438s" podCreationTimestamp="2026-01-28 17:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:25.755848067 +0000 UTC m=+5938.031819578" watchObservedRunningTime="2026-01-28 17:24:25.762515438 +0000 UTC m=+5938.038486949" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.149854 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.412944 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:24:30 crc kubenswrapper[4903]: E0128 17:24:30.413194 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.621144 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-n4xkj"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.623155 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.626953 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.627766 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.630277 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-n4xkj"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.731812 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swkmb\" (UniqueName: \"kubernetes.io/projected/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-kube-api-access-swkmb\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.731925 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-config-data\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.732256 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-scripts\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.732412 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.789031 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.790681 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.795003 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.818169 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.819856 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.826607 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.836220 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.836270 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swkmb\" (UniqueName: \"kubernetes.io/projected/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-kube-api-access-swkmb\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.836315 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-config-data\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.836476 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-scripts\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.849343 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.863343 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-scripts\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.863355 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-config-data\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.865680 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.884330 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swkmb\" (UniqueName: \"kubernetes.io/projected/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-kube-api-access-swkmb\") pod \"nova-cell0-cell-mapping-n4xkj\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.897613 4903 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.938896 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.938955 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27770986-feba-4f5f-871b-94400975d141-logs\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.938988 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66c9k\" (UniqueName: \"kubernetes.io/projected/876399ae-39a6-4764-a49c-63589faf9445-kube-api-access-66c9k\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.939023 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-config-data\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.939079 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txt6\" (UniqueName: \"kubernetes.io/projected/27770986-feba-4f5f-871b-94400975d141-kube-api-access-8txt6\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.939140 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/876399ae-39a6-4764-a49c-63589faf9445-logs\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.939175 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.939270 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-config-data\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.953066 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.956376 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.957615 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.964408 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.971413 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.985704 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59dfb8bbdc-wqzhl"] Jan 28 17:24:30 crc kubenswrapper[4903]: I0128 17:24:30.987417 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.038975 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dfb8bbdc-wqzhl"] Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042512 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-config-data\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042605 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-config-data\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042656 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042703 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27770986-feba-4f5f-871b-94400975d141-logs\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042732 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66c9k\" (UniqueName: \"kubernetes.io/projected/876399ae-39a6-4764-a49c-63589faf9445-kube-api-access-66c9k\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042765 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-config-data\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042804 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grqj7\" (UniqueName: \"kubernetes.io/projected/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-kube-api-access-grqj7\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042859 4903 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8txt6\" (UniqueName: \"kubernetes.io/projected/27770986-feba-4f5f-871b-94400975d141-kube-api-access-8txt6\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042913 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/876399ae-39a6-4764-a49c-63589faf9445-logs\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.042957 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.043014 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.044072 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27770986-feba-4f5f-871b-94400975d141-logs\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.045112 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/876399ae-39a6-4764-a49c-63589faf9445-logs\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.058227 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-config-data\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.060164 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-config-data\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.063561 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.063926 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.071608 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 
28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.073287 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.078995 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.083155 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66c9k\" (UniqueName: \"kubernetes.io/projected/876399ae-39a6-4764-a49c-63589faf9445-kube-api-access-66c9k\") pod \"nova-metadata-0\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.083229 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.087175 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txt6\" (UniqueName: \"kubernetes.io/projected/27770986-feba-4f5f-871b-94400975d141-kube-api-access-8txt6\") pod \"nova-api-0\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.121680 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148296 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148577 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-config\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148632 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148657 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-sb\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148727 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-config-data\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148768 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-nb\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: 
I0128 17:24:31.148801 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-dns-svc\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148839 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4ng\" (UniqueName: \"kubernetes.io/projected/aa7692b0-11b6-4799-8cb5-36b15433a134-kube-api-access-bj4ng\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.148866 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grqj7\" (UniqueName: \"kubernetes.io/projected/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-kube-api-access-grqj7\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.153523 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-config-data\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.157199 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.177922 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grqj7\" (UniqueName: \"kubernetes.io/projected/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-kube-api-access-grqj7\") pod \"nova-scheduler-0\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.250804 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-config\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.250866 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-sb\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.250944 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-nb\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.250989 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.251018 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-dns-svc\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.251052 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.251084 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4ng\" (UniqueName: \"kubernetes.io/projected/aa7692b0-11b6-4799-8cb5-36b15433a134-kube-api-access-bj4ng\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.251183 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/1b34a22a-5117-4bda-915a-074ccd531b90-kube-api-access-vn92n\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.253550 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-config\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.254377 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-sb\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.254697 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-nb\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.255284 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-dns-svc\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.273166 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4ng\" (UniqueName: 
\"kubernetes.io/projected/aa7692b0-11b6-4799-8cb5-36b15433a134-kube-api-access-bj4ng\") pod \"dnsmasq-dns-59dfb8bbdc-wqzhl\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.352556 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.352659 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.352760 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/1b34a22a-5117-4bda-915a-074ccd531b90-kube-api-access-vn92n\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.357254 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.357812 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.369392 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/1b34a22a-5117-4bda-915a-074ccd531b90-kube-api-access-vn92n\") pod \"nova-cell1-novncproxy-0\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.479375 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.509782 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.520637 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.602838 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-n4xkj"] Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.728262 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.744727 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.822023 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27770986-feba-4f5f-871b-94400975d141","Type":"ContainerStarted","Data":"5d0a067e88e804936a4b2df61ee044874a1666f95a0423610063cc914b36bb43"} Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.824026 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n4xkj" event={"ID":"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a","Type":"ContainerStarted","Data":"818391f7879023355fe2577f5eb1e9a1ec258f38b643e2f7598dd4746b1a8c60"} Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.827310 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"876399ae-39a6-4764-a49c-63589faf9445","Type":"ContainerStarted","Data":"57407a2576ab135808fddcfed8a2b07f1132990c71b8bc8a60dcc8e2683f9f00"} Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.892170 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9wdl4"] Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.908201 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.917212 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.917516 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 17:24:31 crc kubenswrapper[4903]: I0128 17:24:31.925766 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9wdl4"] Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.003741 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.082327 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-scripts\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.082377 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.082458 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-config-data\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.082602 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2m7\" (UniqueName: \"kubernetes.io/projected/96ea63b1-7931-4420-89b7-a6577ca2076f-kube-api-access-gn2m7\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.159337 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59dfb8bbdc-wqzhl"] Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.170319 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.198401 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-scripts\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.198446 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.198540 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-config-data\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.198648 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2m7\" (UniqueName: \"kubernetes.io/projected/96ea63b1-7931-4420-89b7-a6577ca2076f-kube-api-access-gn2m7\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.211110 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.211431 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-scripts\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.215845 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-config-data\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.219892 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2m7\" (UniqueName: \"kubernetes.io/projected/96ea63b1-7931-4420-89b7-a6577ca2076f-kube-api-access-gn2m7\") pod \"nova-cell1-conductor-db-sync-9wdl4\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.498757 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.874318 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n4xkj" event={"ID":"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a","Type":"ContainerStarted","Data":"85a23c76bb3a1355f28a4831ce2ad54a729e7770b12866c43ba03ae93f690f9d"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.883400 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"876399ae-39a6-4764-a49c-63589faf9445","Type":"ContainerStarted","Data":"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.883452 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"876399ae-39a6-4764-a49c-63589faf9445","Type":"ContainerStarted","Data":"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.887966 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b34a22a-5117-4bda-915a-074ccd531b90","Type":"ContainerStarted","Data":"18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.888350 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b34a22a-5117-4bda-915a-074ccd531b90","Type":"ContainerStarted","Data":"d341bc02a4311f5d441703bbf004110593925c8ca4beabdfc1f4651b53cf19da"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.894843 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e38176c3-f52a-4a86-8f6a-6e3740ba81e6","Type":"ContainerStarted","Data":"1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.894904 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e38176c3-f52a-4a86-8f6a-6e3740ba81e6","Type":"ContainerStarted","Data":"6d73d8f5658bc03700ff2354110658618b9911a30157192ca5822e1ff987ef08"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.902349 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-n4xkj" podStartSLOduration=2.9023038679999997 podStartE2EDuration="2.902303868s" podCreationTimestamp="2026-01-28 17:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:32.891084444 +0000 UTC m=+5945.167055955" watchObservedRunningTime="2026-01-28 17:24:32.902303868 +0000 UTC m=+5945.178275379" Jan 28 
17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.913011 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.912993708 podStartE2EDuration="2.912993708s" podCreationTimestamp="2026-01-28 17:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:32.91047457 +0000 UTC m=+5945.186446081" watchObservedRunningTime="2026-01-28 17:24:32.912993708 +0000 UTC m=+5945.188965219" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.914206 4903 generic.go:334] "Generic (PLEG): container finished" podID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerID="088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0" exitCode=0 Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.914329 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" event={"ID":"aa7692b0-11b6-4799-8cb5-36b15433a134","Type":"ContainerDied","Data":"088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.914363 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" event={"ID":"aa7692b0-11b6-4799-8cb5-36b15433a134","Type":"ContainerStarted","Data":"fe305b5b7d501391af9c401f7aecc181d7010e2c39fb2e8a3bc26dd361c25379"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.931514 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27770986-feba-4f5f-871b-94400975d141","Type":"ContainerStarted","Data":"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.931573 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27770986-feba-4f5f-871b-94400975d141","Type":"ContainerStarted","Data":"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151"} Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.964158 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.964134535 podStartE2EDuration="2.964134535s" podCreationTimestamp="2026-01-28 17:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:32.939872577 +0000 UTC m=+5945.215844088" watchObservedRunningTime="2026-01-28 17:24:32.964134535 +0000 UTC m=+5945.240106046" Jan 28 17:24:32 crc kubenswrapper[4903]: I0128 17:24:32.977222 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.977203779 podStartE2EDuration="2.977203779s" podCreationTimestamp="2026-01-28 17:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:32.96209576 +0000 UTC m=+5945.238067261" watchObservedRunningTime="2026-01-28 17:24:32.977203779 +0000 UTC m=+5945.253175290" Jan 28 17:24:33 crc kubenswrapper[4903]: I0128 17:24:33.001822 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.001800297 podStartE2EDuration="3.001800297s" podCreationTimestamp="2026-01-28 17:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 17:24:32.983774618 +0000 UTC m=+5945.259746129" watchObservedRunningTime="2026-01-28 17:24:33.001800297 +0000 UTC m=+5945.277771808" Jan 28 17:24:33 crc kubenswrapper[4903]: I0128 17:24:33.083505 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9wdl4"] Jan 28 17:24:33 crc kubenswrapper[4903]: I0128 17:24:33.941484 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" event={"ID":"96ea63b1-7931-4420-89b7-a6577ca2076f","Type":"ContainerStarted","Data":"f3d88e81a1c88d8dfc98ce2b982579535fbb70753657bb71991f7570229545d3"} Jan 28 17:24:33 crc kubenswrapper[4903]: I0128 17:24:33.941891 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" event={"ID":"96ea63b1-7931-4420-89b7-a6577ca2076f","Type":"ContainerStarted","Data":"082a10d5c04efec93e9ea5ad4266fc8ca017e10a7fb9bef377141acbe0214a12"} Jan 28 17:24:33 crc kubenswrapper[4903]: I0128 17:24:33.943804 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" event={"ID":"aa7692b0-11b6-4799-8cb5-36b15433a134","Type":"ContainerStarted","Data":"02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3"} Jan 28 17:24:33 crc kubenswrapper[4903]: I0128 17:24:33.959613 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" podStartSLOduration=2.959591878 podStartE2EDuration="2.959591878s" podCreationTimestamp="2026-01-28 17:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:33.958881488 +0000 UTC m=+5946.234852999" watchObservedRunningTime="2026-01-28 17:24:33.959591878 +0000 UTC m=+5946.235563389" Jan 28 17:24:33 crc kubenswrapper[4903]: I0128 17:24:33.986266 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" podStartSLOduration=3.98622073 podStartE2EDuration="3.98622073s" podCreationTimestamp="2026-01-28 17:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:33.977567825 +0000 UTC m=+5946.253539346" watchObservedRunningTime="2026-01-28 17:24:33.98622073 +0000 UTC m=+5946.262192241" Jan 28 17:24:34 crc kubenswrapper[4903]: I0128 17:24:34.953574 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.039586 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.039812 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1b34a22a-5117-4bda-915a-074ccd531b90" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a" gracePeriod=30 Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.053176 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.053424 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-log" 
containerID="cri-o://151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395" gracePeriod=30 Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.053491 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-metadata" containerID="cri-o://3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885" gracePeriod=30 Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.688280 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.702551 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-combined-ca-bundle\") pod \"876399ae-39a6-4764-a49c-63589faf9445\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.702656 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-config-data\") pod \"876399ae-39a6-4764-a49c-63589faf9445\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.702696 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66c9k\" (UniqueName: \"kubernetes.io/projected/876399ae-39a6-4764-a49c-63589faf9445-kube-api-access-66c9k\") pod \"876399ae-39a6-4764-a49c-63589faf9445\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.702880 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/876399ae-39a6-4764-a49c-63589faf9445-logs\") pod \"876399ae-39a6-4764-a49c-63589faf9445\" (UID: \"876399ae-39a6-4764-a49c-63589faf9445\") " Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.703790 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/876399ae-39a6-4764-a49c-63589faf9445-logs" (OuterVolumeSpecName: "logs") pod "876399ae-39a6-4764-a49c-63589faf9445" (UID: "876399ae-39a6-4764-a49c-63589faf9445"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.712248 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876399ae-39a6-4764-a49c-63589faf9445-kube-api-access-66c9k" (OuterVolumeSpecName: "kube-api-access-66c9k") pod "876399ae-39a6-4764-a49c-63589faf9445" (UID: "876399ae-39a6-4764-a49c-63589faf9445"). InnerVolumeSpecName "kube-api-access-66c9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.739865 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-config-data" (OuterVolumeSpecName: "config-data") pod "876399ae-39a6-4764-a49c-63589faf9445" (UID: "876399ae-39a6-4764-a49c-63589faf9445"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.748167 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.748483 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "876399ae-39a6-4764-a49c-63589faf9445" (UID: "876399ae-39a6-4764-a49c-63589faf9445"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.804900 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/1b34a22a-5117-4bda-915a-074ccd531b90-kube-api-access-vn92n\") pod \"1b34a22a-5117-4bda-915a-074ccd531b90\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.805046 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-combined-ca-bundle\") pod \"1b34a22a-5117-4bda-915a-074ccd531b90\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.805117 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-config-data\") pod \"1b34a22a-5117-4bda-915a-074ccd531b90\" (UID: \"1b34a22a-5117-4bda-915a-074ccd531b90\") " Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.805749 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/876399ae-39a6-4764-a49c-63589faf9445-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.805772 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.805785 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/876399ae-39a6-4764-a49c-63589faf9445-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.805796 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66c9k\" (UniqueName: \"kubernetes.io/projected/876399ae-39a6-4764-a49c-63589faf9445-kube-api-access-66c9k\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.809296 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b34a22a-5117-4bda-915a-074ccd531b90-kube-api-access-vn92n" (OuterVolumeSpecName: "kube-api-access-vn92n") pod "1b34a22a-5117-4bda-915a-074ccd531b90" (UID: "1b34a22a-5117-4bda-915a-074ccd531b90"). InnerVolumeSpecName "kube-api-access-vn92n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.836719 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-config-data" (OuterVolumeSpecName: "config-data") pod "1b34a22a-5117-4bda-915a-074ccd531b90" (UID: "1b34a22a-5117-4bda-915a-074ccd531b90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.842458 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b34a22a-5117-4bda-915a-074ccd531b90" (UID: "1b34a22a-5117-4bda-915a-074ccd531b90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.907181 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn92n\" (UniqueName: \"kubernetes.io/projected/1b34a22a-5117-4bda-915a-074ccd531b90-kube-api-access-vn92n\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.907221 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.907234 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b34a22a-5117-4bda-915a-074ccd531b90-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.967332 4903 generic.go:334] "Generic (PLEG): container finished" podID="1b34a22a-5117-4bda-915a-074ccd531b90" containerID="18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a" exitCode=0 Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.967521 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b34a22a-5117-4bda-915a-074ccd531b90","Type":"ContainerDied","Data":"18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a"} Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.967841 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1b34a22a-5117-4bda-915a-074ccd531b90","Type":"ContainerDied","Data":"d341bc02a4311f5d441703bbf004110593925c8ca4beabdfc1f4651b53cf19da"} Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.967866 4903 scope.go:117] "RemoveContainer" containerID="18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.967631 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.974453 4903 generic.go:334] "Generic (PLEG): container finished" podID="876399ae-39a6-4764-a49c-63589faf9445" containerID="3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885" exitCode=0 Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.974493 4903 generic.go:334] "Generic (PLEG): container finished" podID="876399ae-39a6-4764-a49c-63589faf9445" containerID="151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395" exitCode=143 Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.974776 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.974796 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"876399ae-39a6-4764-a49c-63589faf9445","Type":"ContainerDied","Data":"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885"} Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.974854 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"876399ae-39a6-4764-a49c-63589faf9445","Type":"ContainerDied","Data":"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395"} Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.974867 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"876399ae-39a6-4764-a49c-63589faf9445","Type":"ContainerDied","Data":"57407a2576ab135808fddcfed8a2b07f1132990c71b8bc8a60dcc8e2683f9f00"} Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.998059 4903 scope.go:117] "RemoveContainer" containerID="18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a" Jan 28 17:24:35 crc kubenswrapper[4903]: E0128 17:24:35.998731 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a\": container with ID starting with 18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a not found: ID does not exist" containerID="18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.998795 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a"} err="failed to get container status \"18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a\": rpc error: code = NotFound desc = could not find container \"18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a\": container with ID starting with 18c12acb14972ccb8989a94ed849c7177d0c5da13917dca890077da2d33c253a not found: ID does not exist" Jan 28 17:24:35 crc kubenswrapper[4903]: I0128 17:24:35.998833 4903 scope.go:117] "RemoveContainer" containerID="3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.008282 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.020454 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.021260 4903 scope.go:117] "RemoveContainer" containerID="151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.037630 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.052801 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.062602 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: E0128 17:24:36.063041 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b34a22a-5117-4bda-915a-074ccd531b90" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 17:24:36 
crc kubenswrapper[4903]: I0128 17:24:36.063058 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b34a22a-5117-4bda-915a-074ccd531b90" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 17:24:36 crc kubenswrapper[4903]: E0128 17:24:36.063076 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-log" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.063083 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-log" Jan 28 17:24:36 crc kubenswrapper[4903]: E0128 17:24:36.063104 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-metadata" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.063110 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-metadata" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.063349 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-log" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.063380 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b34a22a-5117-4bda-915a-074ccd531b90" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.063403 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="876399ae-39a6-4764-a49c-63589faf9445" containerName="nova-metadata-metadata" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.064626 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.067645 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.069827 4903 scope.go:117] "RemoveContainer" containerID="3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.072839 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.074107 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: E0128 17:24:36.074830 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885\": container with ID starting with 3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885 not found: ID does not exist" containerID="3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.074872 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.074874 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885"} err="failed to get container status \"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885\": rpc error: code = NotFound desc = could not find container \"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885\": container with ID starting with 3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885 not found: ID does not exist" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.074905 4903 scope.go:117] "RemoveContainer" containerID="151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395" Jan 28 17:24:36 crc kubenswrapper[4903]: E0128 17:24:36.075857 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395\": container with ID starting with 151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395 not found: ID does not exist" containerID="151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.075893 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395"} err="failed to get container status \"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395\": rpc error: code = NotFound desc = could not find container \"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395\": container with ID starting with 151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395 not found: ID does not exist" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.075915 4903 scope.go:117] "RemoveContainer" containerID="3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.078293 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.078455 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.078650 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.078804 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885"} err="failed to get container status 
\"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885\": rpc error: code = NotFound desc = could not find container \"3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885\": container with ID starting with 3946c11f332124054ec2cfa4142a94e03c87b366c09dabb3c79b6ef6a755d885 not found: ID does not exist" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.078844 4903 scope.go:117] "RemoveContainer" containerID="151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.085730 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395"} err="failed to get container status \"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395\": rpc error: code = NotFound desc = could not find container \"151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395\": container with ID starting with 151e0949fe1641988798e1a10bc7556e7aaba82188ae17a77b35c8002443e395 not found: ID does not exist" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.096232 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.106723 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.111693 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.111744 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b26e0f-801c-44b9-81bd-f584c967b888-logs\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.111788 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.111812 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.111918 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5cgt\" (UniqueName: \"kubernetes.io/projected/9352b280-1bea-4c59-9a84-16dcd9807cc1-kube-api-access-b5cgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.111970 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.111994 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d442n\" (UniqueName: \"kubernetes.io/projected/68b26e0f-801c-44b9-81bd-f584c967b888-kube-api-access-d442n\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.112023 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.112059 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-config-data\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.112078 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.213965 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214007 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d442n\" (UniqueName: \"kubernetes.io/projected/68b26e0f-801c-44b9-81bd-f584c967b888-kube-api-access-d442n\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214033 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214067 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-config-data\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214084 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214121 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214144 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b26e0f-801c-44b9-81bd-f584c967b888-logs\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214183 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214204 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.214242 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5cgt\" (UniqueName: \"kubernetes.io/projected/9352b280-1bea-4c59-9a84-16dcd9807cc1-kube-api-access-b5cgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.216343 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b26e0f-801c-44b9-81bd-f584c967b888-logs\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.219795 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.220103 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.220854 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc 
kubenswrapper[4903]: I0128 17:24:36.221076 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.224969 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.225400 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-config-data\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.227005 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9352b280-1bea-4c59-9a84-16dcd9807cc1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.231457 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5cgt\" (UniqueName: \"kubernetes.io/projected/9352b280-1bea-4c59-9a84-16dcd9807cc1-kube-api-access-b5cgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"9352b280-1bea-4c59-9a84-16dcd9807cc1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.239693 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d442n\" (UniqueName: \"kubernetes.io/projected/68b26e0f-801c-44b9-81bd-f584c967b888-kube-api-access-d442n\") pod \"nova-metadata-0\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.402751 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.418062 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.431379 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b34a22a-5117-4bda-915a-074ccd531b90" path="/var/lib/kubelet/pods/1b34a22a-5117-4bda-915a-074ccd531b90/volumes" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.433955 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="876399ae-39a6-4764-a49c-63589faf9445" path="/var/lib/kubelet/pods/876399ae-39a6-4764-a49c-63589faf9445/volumes" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.480861 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.953875 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: W0128 17:24:36.953889 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9352b280_1bea_4c59_9a84_16dcd9807cc1.slice/crio-0e5131697ddc33c3fc840df4e3aecc7951e11e99841af7b46e0ed9f0326d638f WatchSource:0}: Error finding container 0e5131697ddc33c3fc840df4e3aecc7951e11e99841af7b46e0ed9f0326d638f: Status 404 returned error can't find the container with id 0e5131697ddc33c3fc840df4e3aecc7951e11e99841af7b46e0ed9f0326d638f Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.981387 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.983305 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9352b280-1bea-4c59-9a84-16dcd9807cc1","Type":"ContainerStarted","Data":"0e5131697ddc33c3fc840df4e3aecc7951e11e99841af7b46e0ed9f0326d638f"} Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.987341 4903 generic.go:334] "Generic (PLEG): container finished" podID="96ea63b1-7931-4420-89b7-a6577ca2076f" containerID="f3d88e81a1c88d8dfc98ce2b982579535fbb70753657bb71991f7570229545d3" exitCode=0 Jan 28 17:24:36 crc kubenswrapper[4903]: I0128 17:24:36.987411 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" event={"ID":"96ea63b1-7931-4420-89b7-a6577ca2076f","Type":"ContainerDied","Data":"f3d88e81a1c88d8dfc98ce2b982579535fbb70753657bb71991f7570229545d3"} Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.009598 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9352b280-1bea-4c59-9a84-16dcd9807cc1","Type":"ContainerStarted","Data":"367411597e72ae73ed14dad2f988c67bb905b791e4b6f5b03dd1d46deae868f3"} Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.016239 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68b26e0f-801c-44b9-81bd-f584c967b888","Type":"ContainerStarted","Data":"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b"} Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.016331 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68b26e0f-801c-44b9-81bd-f584c967b888","Type":"ContainerStarted","Data":"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472"} Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.016351 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"68b26e0f-801c-44b9-81bd-f584c967b888","Type":"ContainerStarted","Data":"73cf0689d727044a3cbfa7b8e036d02b1bea907d6e1e2f5e166f2d2af4083088"} Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.019429 4903 generic.go:334] "Generic (PLEG): container finished" podID="393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" containerID="85a23c76bb3a1355f28a4831ce2ad54a729e7770b12866c43ba03ae93f690f9d" exitCode=0 Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.019671 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n4xkj" event={"ID":"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a","Type":"ContainerDied","Data":"85a23c76bb3a1355f28a4831ce2ad54a729e7770b12866c43ba03ae93f690f9d"} Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.044813 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.04477968 podStartE2EDuration="2.04477968s" podCreationTimestamp="2026-01-28 17:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:38.03519125 +0000 UTC m=+5950.311162791" watchObservedRunningTime="2026-01-28 17:24:38.04477968 +0000 UTC m=+5950.320751201" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.073301 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.073278163 podStartE2EDuration="2.073278163s" podCreationTimestamp="2026-01-28 17:24:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:38.068655508 +0000 UTC m=+5950.344627039" watchObservedRunningTime="2026-01-28 17:24:38.073278163 +0000 UTC m=+5950.349249674" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.404779 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.462519 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-scripts\") pod \"96ea63b1-7931-4420-89b7-a6577ca2076f\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.462636 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-config-data\") pod \"96ea63b1-7931-4420-89b7-a6577ca2076f\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.463329 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2m7\" (UniqueName: \"kubernetes.io/projected/96ea63b1-7931-4420-89b7-a6577ca2076f-kube-api-access-gn2m7\") pod \"96ea63b1-7931-4420-89b7-a6577ca2076f\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.463429 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-combined-ca-bundle\") pod \"96ea63b1-7931-4420-89b7-a6577ca2076f\" (UID: \"96ea63b1-7931-4420-89b7-a6577ca2076f\") " Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.470842 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-scripts" (OuterVolumeSpecName: "scripts") pod "96ea63b1-7931-4420-89b7-a6577ca2076f" (UID: "96ea63b1-7931-4420-89b7-a6577ca2076f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.471086 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ea63b1-7931-4420-89b7-a6577ca2076f-kube-api-access-gn2m7" (OuterVolumeSpecName: "kube-api-access-gn2m7") pod "96ea63b1-7931-4420-89b7-a6577ca2076f" (UID: "96ea63b1-7931-4420-89b7-a6577ca2076f"). InnerVolumeSpecName "kube-api-access-gn2m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.496837 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-config-data" (OuterVolumeSpecName: "config-data") pod "96ea63b1-7931-4420-89b7-a6577ca2076f" (UID: "96ea63b1-7931-4420-89b7-a6577ca2076f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.510973 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96ea63b1-7931-4420-89b7-a6577ca2076f" (UID: "96ea63b1-7931-4420-89b7-a6577ca2076f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.566301 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.566344 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.566391 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2m7\" (UniqueName: \"kubernetes.io/projected/96ea63b1-7931-4420-89b7-a6577ca2076f-kube-api-access-gn2m7\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:38 crc kubenswrapper[4903]: I0128 17:24:38.566403 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ea63b1-7931-4420-89b7-a6577ca2076f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.031358 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.042509 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9wdl4" event={"ID":"96ea63b1-7931-4420-89b7-a6577ca2076f","Type":"ContainerDied","Data":"082a10d5c04efec93e9ea5ad4266fc8ca017e10a7fb9bef377141acbe0214a12"} Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.042597 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="082a10d5c04efec93e9ea5ad4266fc8ca017e10a7fb9bef377141acbe0214a12" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.108582 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 17:24:39 crc kubenswrapper[4903]: E0128 17:24:39.109467 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ea63b1-7931-4420-89b7-a6577ca2076f" containerName="nova-cell1-conductor-db-sync" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.109497 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ea63b1-7931-4420-89b7-a6577ca2076f" containerName="nova-cell1-conductor-db-sync" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.116073 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ea63b1-7931-4420-89b7-a6577ca2076f" containerName="nova-cell1-conductor-db-sync" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.117485 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.121186 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.140467 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.177810 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.178101 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.178166 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhdpg\" (UniqueName: \"kubernetes.io/projected/9a06e697-989a-4142-b291-83e72a63b996-kube-api-access-nhdpg\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.280228 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.280307 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhdpg\" (UniqueName: \"kubernetes.io/projected/9a06e697-989a-4142-b291-83e72a63b996-kube-api-access-nhdpg\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.280374 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.291127 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.300101 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhdpg\" (UniqueName: \"kubernetes.io/projected/9a06e697-989a-4142-b291-83e72a63b996-kube-api-access-nhdpg\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.304010 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.442797 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.535710 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.584014 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swkmb\" (UniqueName: \"kubernetes.io/projected/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-kube-api-access-swkmb\") pod \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.584182 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-config-data\") pod \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.584219 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-combined-ca-bundle\") pod \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.584249 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-scripts\") pod \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\" (UID: \"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a\") " Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.588833 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-kube-api-access-swkmb" (OuterVolumeSpecName: "kube-api-access-swkmb") pod "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" (UID: "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a"). InnerVolumeSpecName "kube-api-access-swkmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.591669 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-scripts" (OuterVolumeSpecName: "scripts") pod "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" (UID: "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.630701 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-config-data" (OuterVolumeSpecName: "config-data") pod "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" (UID: "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.637650 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" (UID: "393ab6f9-40fb-4c36-a6c9-a2bff0096e9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.688180 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.688374 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.688422 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.688442 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swkmb\" (UniqueName: \"kubernetes.io/projected/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a-kube-api-access-swkmb\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:39 crc kubenswrapper[4903]: I0128 17:24:39.902245 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 17:24:39 crc kubenswrapper[4903]: W0128 17:24:39.903087 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a06e697_989a_4142_b291_83e72a63b996.slice/crio-afc2b2beceb14cf4e7ea1f7c450288776b74ebc23e29ee45ee29141782346da6 WatchSource:0}: Error finding container afc2b2beceb14cf4e7ea1f7c450288776b74ebc23e29ee45ee29141782346da6: Status 404 returned error can't find the container with id afc2b2beceb14cf4e7ea1f7c450288776b74ebc23e29ee45ee29141782346da6 Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.040934 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-n4xkj" Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.042187 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-n4xkj" event={"ID":"393ab6f9-40fb-4c36-a6c9-a2bff0096e9a","Type":"ContainerDied","Data":"818391f7879023355fe2577f5eb1e9a1ec258f38b643e2f7598dd4746b1a8c60"} Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.042232 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818391f7879023355fe2577f5eb1e9a1ec258f38b643e2f7598dd4746b1a8c60" Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.043317 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9a06e697-989a-4142-b291-83e72a63b996","Type":"ContainerStarted","Data":"afc2b2beceb14cf4e7ea1f7c450288776b74ebc23e29ee45ee29141782346da6"} Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.263395 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.263685 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-log" containerID="cri-o://53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151" gracePeriod=30 Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.263773 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-api" containerID="cri-o://47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3" gracePeriod=30 Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.326602 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.326854 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e38176c3-f52a-4a86-8f6a-6e3740ba81e6" containerName="nova-scheduler-scheduler" containerID="cri-o://1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1" gracePeriod=30 Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.339102 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.339370 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-log" containerID="cri-o://53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472" gracePeriod=30 Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.339568 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-metadata" containerID="cri-o://bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b" gracePeriod=30 Jan 28 17:24:40 crc kubenswrapper[4903]: I0128 17:24:40.905906 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.001316 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.031855 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8txt6\" (UniqueName: \"kubernetes.io/projected/27770986-feba-4f5f-871b-94400975d141-kube-api-access-8txt6\") pod \"27770986-feba-4f5f-871b-94400975d141\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.032212 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27770986-feba-4f5f-871b-94400975d141-logs\") pod \"27770986-feba-4f5f-871b-94400975d141\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.032297 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-config-data\") pod \"27770986-feba-4f5f-871b-94400975d141\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.032384 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-combined-ca-bundle\") pod \"27770986-feba-4f5f-871b-94400975d141\" (UID: \"27770986-feba-4f5f-871b-94400975d141\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.032671 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27770986-feba-4f5f-871b-94400975d141-logs" (OuterVolumeSpecName: "logs") pod "27770986-feba-4f5f-871b-94400975d141" (UID: "27770986-feba-4f5f-871b-94400975d141"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.036079 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27770986-feba-4f5f-871b-94400975d141-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.037520 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27770986-feba-4f5f-871b-94400975d141-kube-api-access-8txt6" (OuterVolumeSpecName: "kube-api-access-8txt6") pod "27770986-feba-4f5f-871b-94400975d141" (UID: "27770986-feba-4f5f-871b-94400975d141"). InnerVolumeSpecName "kube-api-access-8txt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.063374 4903 generic.go:334] "Generic (PLEG): container finished" podID="68b26e0f-801c-44b9-81bd-f584c967b888" containerID="bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b" exitCode=0 Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.063415 4903 generic.go:334] "Generic (PLEG): container finished" podID="68b26e0f-801c-44b9-81bd-f584c967b888" containerID="53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472" exitCode=143 Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.063457 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.063476 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68b26e0f-801c-44b9-81bd-f584c967b888","Type":"ContainerDied","Data":"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b"} Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.063605 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68b26e0f-801c-44b9-81bd-f584c967b888","Type":"ContainerDied","Data":"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472"} Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.063653 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68b26e0f-801c-44b9-81bd-f584c967b888","Type":"ContainerDied","Data":"73cf0689d727044a3cbfa7b8e036d02b1bea907d6e1e2f5e166f2d2af4083088"} Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.063682 4903 scope.go:117] "RemoveContainer" containerID="bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.067447 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9a06e697-989a-4142-b291-83e72a63b996","Type":"ContainerStarted","Data":"da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16"} Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.067600 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.076310 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-config-data" (OuterVolumeSpecName: "config-data") pod "27770986-feba-4f5f-871b-94400975d141" (UID: "27770986-feba-4f5f-871b-94400975d141"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.079479 4903 generic.go:334] "Generic (PLEG): container finished" podID="27770986-feba-4f5f-871b-94400975d141" containerID="47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3" exitCode=0 Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.079505 4903 generic.go:334] "Generic (PLEG): container finished" podID="27770986-feba-4f5f-871b-94400975d141" containerID="53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151" exitCode=143 Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.079538 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27770986-feba-4f5f-871b-94400975d141","Type":"ContainerDied","Data":"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3"} Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.079561 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27770986-feba-4f5f-871b-94400975d141","Type":"ContainerDied","Data":"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151"} Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.079574 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27770986-feba-4f5f-871b-94400975d141","Type":"ContainerDied","Data":"5d0a067e88e804936a4b2df61ee044874a1666f95a0423610063cc914b36bb43"} Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.079623 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.093158 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27770986-feba-4f5f-871b-94400975d141" (UID: "27770986-feba-4f5f-871b-94400975d141"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.097224 4903 scope.go:117] "RemoveContainer" containerID="53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.097850 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.097828474 podStartE2EDuration="2.097828474s" podCreationTimestamp="2026-01-28 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:41.094125514 +0000 UTC m=+5953.370097025" watchObservedRunningTime="2026-01-28 17:24:41.097828474 +0000 UTC m=+5953.373799985" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.114310 4903 scope.go:117] "RemoveContainer" containerID="bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.114953 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b\": container with ID starting with bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b not found: ID does not exist" containerID="bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.114991 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b"} err="failed to get container status \"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b\": rpc error: code = NotFound desc = could not find container \"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b\": container with ID starting with bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.115016 4903 scope.go:117] "RemoveContainer" containerID="53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.115415 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472\": container with ID starting with 53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472 not found: ID does not exist" containerID="53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.115445 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472"} err="failed to get container status \"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472\": rpc error: code = NotFound desc = could not find container \"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472\": container with ID starting with 53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472 not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.115465 4903 scope.go:117] "RemoveContainer" containerID="bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.115829 4903 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b"} err="failed to get container status \"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b\": rpc error: code = NotFound desc = could not find container \"bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b\": container with ID starting with bb0e46914bff0fb6e0da17d3e3603b9293ec9cd632269fc9d1af70adef42dd1b not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.115856 4903 scope.go:117] "RemoveContainer" containerID="53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.116300 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472"} err="failed to get container status \"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472\": rpc error: code = NotFound desc = could not find container \"53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472\": container with ID starting with 53571e52a36a3c8bc776ad8543efd9fd1d8397caa1c912d86336efbb6a023472 not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.116322 4903 scope.go:117] "RemoveContainer" containerID="47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.136214 4903 scope.go:117] "RemoveContainer" containerID="53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.136723 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-nova-metadata-tls-certs\") pod \"68b26e0f-801c-44b9-81bd-f584c967b888\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.136887 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b26e0f-801c-44b9-81bd-f584c967b888-logs\") pod \"68b26e0f-801c-44b9-81bd-f584c967b888\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.136919 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-combined-ca-bundle\") pod \"68b26e0f-801c-44b9-81bd-f584c967b888\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.136967 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d442n\" (UniqueName: \"kubernetes.io/projected/68b26e0f-801c-44b9-81bd-f584c967b888-kube-api-access-d442n\") pod \"68b26e0f-801c-44b9-81bd-f584c967b888\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.137003 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-config-data\") pod \"68b26e0f-801c-44b9-81bd-f584c967b888\" (UID: \"68b26e0f-801c-44b9-81bd-f584c967b888\") " Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.137343 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/68b26e0f-801c-44b9-81bd-f584c967b888-logs" (OuterVolumeSpecName: "logs") pod "68b26e0f-801c-44b9-81bd-f584c967b888" (UID: "68b26e0f-801c-44b9-81bd-f584c967b888"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.138137 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8txt6\" (UniqueName: \"kubernetes.io/projected/27770986-feba-4f5f-871b-94400975d141-kube-api-access-8txt6\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.138163 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b26e0f-801c-44b9-81bd-f584c967b888-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.138313 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.138326 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27770986-feba-4f5f-871b-94400975d141-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.140874 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b26e0f-801c-44b9-81bd-f584c967b888-kube-api-access-d442n" (OuterVolumeSpecName: "kube-api-access-d442n") pod "68b26e0f-801c-44b9-81bd-f584c967b888" (UID: "68b26e0f-801c-44b9-81bd-f584c967b888"). InnerVolumeSpecName "kube-api-access-d442n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.164556 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-config-data" (OuterVolumeSpecName: "config-data") pod "68b26e0f-801c-44b9-81bd-f584c967b888" (UID: "68b26e0f-801c-44b9-81bd-f584c967b888"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.172274 4903 scope.go:117] "RemoveContainer" containerID="47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.172592 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3\": container with ID starting with 47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3 not found: ID does not exist" containerID="47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.172622 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3"} err="failed to get container status \"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3\": rpc error: code = NotFound desc = could not find container \"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3\": container with ID starting with 47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3 not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.172642 4903 scope.go:117] "RemoveContainer" containerID="53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.172816 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151\": container with ID starting with 53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151 not found: ID does not exist" containerID="53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.172837 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151"} err="failed to get container status \"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151\": rpc error: code = NotFound desc = could not find container \"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151\": container with ID starting with 53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151 not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.172849 4903 scope.go:117] "RemoveContainer" containerID="47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.173000 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3"} err="failed to get container status \"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3\": rpc error: code = NotFound desc = could not find container \"47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3\": container with ID starting with 47e64a37832f96bd53588760d3bda87257d9a682a488ae7f0ee83ccafa0c77f3 not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.173019 4903 scope.go:117] "RemoveContainer" containerID="53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.173162 4903 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151"} err="failed to get container status \"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151\": rpc error: code = NotFound desc = could not find container \"53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151\": container with ID starting with 53084a8650ec33f2dd7a764db3ea0b19da14ac0c162ca5b2aa4d4eac1c11e151 not found: ID does not exist" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.174820 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68b26e0f-801c-44b9-81bd-f584c967b888" (UID: "68b26e0f-801c-44b9-81bd-f584c967b888"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.205855 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "68b26e0f-801c-44b9-81bd-f584c967b888" (UID: "68b26e0f-801c-44b9-81bd-f584c967b888"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.240821 4903 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.240864 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.240876 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d442n\" (UniqueName: \"kubernetes.io/projected/68b26e0f-801c-44b9-81bd-f584c967b888-kube-api-access-d442n\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.240884 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b26e0f-801c-44b9-81bd-f584c967b888-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.419147 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.452685 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.475621 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.489332 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.503264 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.513096 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.515917 4903 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.516262 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" containerName="nova-manage" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516273 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" containerName="nova-manage" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.516290 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-metadata" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516297 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-metadata" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.516309 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-api" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516315 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-api" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.516335 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-log" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516341 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-log" Jan 28 17:24:41 crc kubenswrapper[4903]: E0128 17:24:41.516355 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-log" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516361 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-log" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516510 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-log" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516544 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-log" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516555 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" containerName="nova-metadata-metadata" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516565 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="27770986-feba-4f5f-871b-94400975d141" containerName="nova-api-api" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.516575 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" containerName="nova-manage" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.517492 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.520075 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.526791 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.528581 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.530770 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.531289 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.536078 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.549677 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.595521 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb6d4cc67-7zkv2"] Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.596129 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" containerName="dnsmasq-dns" containerID="cri-o://25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1" gracePeriod=10 Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.651495 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-config-data\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.651691 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.652124 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.652237 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8499017e-3250-43f2-a8f9-d1f082b721d8-logs\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.652289 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636f99d3-1a1b-4672-bb60-891b6af33c36-logs\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 
17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.652316 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9bkm\" (UniqueName: \"kubernetes.io/projected/636f99d3-1a1b-4672-bb60-891b6af33c36-kube-api-access-f9bkm\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.652361 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-config-data\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.652390 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9lh\" (UniqueName: \"kubernetes.io/projected/8499017e-3250-43f2-a8f9-d1f082b721d8-kube-api-access-lp9lh\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.652435 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.754855 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8499017e-3250-43f2-a8f9-d1f082b721d8-logs\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.754925 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636f99d3-1a1b-4672-bb60-891b6af33c36-logs\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.754951 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9bkm\" (UniqueName: \"kubernetes.io/projected/636f99d3-1a1b-4672-bb60-891b6af33c36-kube-api-access-f9bkm\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.754977 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-config-data\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.755007 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9lh\" (UniqueName: \"kubernetes.io/projected/8499017e-3250-43f2-a8f9-d1f082b721d8-kube-api-access-lp9lh\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.755038 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.755117 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-config-data\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.755148 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.755214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.755686 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8499017e-3250-43f2-a8f9-d1f082b721d8-logs\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.755951 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636f99d3-1a1b-4672-bb60-891b6af33c36-logs\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.763007 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.763655 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.763932 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-config-data\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.764699 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.776573 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-config-data\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.777811 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp9lh\" (UniqueName: \"kubernetes.io/projected/8499017e-3250-43f2-a8f9-d1f082b721d8-kube-api-access-lp9lh\") pod \"nova-api-0\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.778389 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9bkm\" (UniqueName: \"kubernetes.io/projected/636f99d3-1a1b-4672-bb60-891b6af33c36-kube-api-access-f9bkm\") pod \"nova-metadata-0\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " pod="openstack/nova-metadata-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.832825 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:24:41 crc kubenswrapper[4903]: I0128 17:24:41.859740 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.042303 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.097895 4903 generic.go:334] "Generic (PLEG): container finished" podID="28bcef49-09f5-4d52-b6d5-022be9688809" containerID="25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1" exitCode=0 Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.098569 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.098876 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" event={"ID":"28bcef49-09f5-4d52-b6d5-022be9688809","Type":"ContainerDied","Data":"25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1"} Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.098945 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" event={"ID":"28bcef49-09f5-4d52-b6d5-022be9688809","Type":"ContainerDied","Data":"fa936205419dedd9b88c2c4b03211693777fa879ae55640747ccccde51caf489"} Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.098990 4903 scope.go:117] "RemoveContainer" containerID="25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.138739 4903 scope.go:117] "RemoveContainer" containerID="ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.165585 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-nb\") pod \"28bcef49-09f5-4d52-b6d5-022be9688809\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.165721 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-sb\") pod \"28bcef49-09f5-4d52-b6d5-022be9688809\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.165829 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-config\") pod \"28bcef49-09f5-4d52-b6d5-022be9688809\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.165917 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-dns-svc\") pod \"28bcef49-09f5-4d52-b6d5-022be9688809\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.165941 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wxhw\" (UniqueName: \"kubernetes.io/projected/28bcef49-09f5-4d52-b6d5-022be9688809-kube-api-access-6wxhw\") pod \"28bcef49-09f5-4d52-b6d5-022be9688809\" (UID: \"28bcef49-09f5-4d52-b6d5-022be9688809\") " Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.174975 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bcef49-09f5-4d52-b6d5-022be9688809-kube-api-access-6wxhw" (OuterVolumeSpecName: "kube-api-access-6wxhw") pod "28bcef49-09f5-4d52-b6d5-022be9688809" (UID: "28bcef49-09f5-4d52-b6d5-022be9688809"). InnerVolumeSpecName "kube-api-access-6wxhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.178829 4903 scope.go:117] "RemoveContainer" containerID="25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1" Jan 28 17:24:42 crc kubenswrapper[4903]: E0128 17:24:42.182842 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1\": container with ID starting with 25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1 not found: ID does not exist" containerID="25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.182875 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1"} err="failed to get container status \"25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1\": rpc error: code = NotFound desc = could not find container \"25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1\": container with ID starting with 25430db547f660554113457ad0221de6cda4cc6b8f661dba70e429f8dac7c4b1 not found: ID does not exist" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.182897 4903 scope.go:117] "RemoveContainer" containerID="ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15" Jan 28 17:24:42 crc kubenswrapper[4903]: E0128 17:24:42.183904 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15\": container with ID starting with ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15 not found: ID does not exist" containerID="ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.183929 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15"} err="failed to get container status \"ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15\": rpc error: code = NotFound desc = could not find container \"ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15\": container with ID starting with ddade47bb85f8c0fb19c6477a812d98a65e414e1a75bdcd2defb01cf4ba8fc15 not found: ID does not exist" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.212871 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28bcef49-09f5-4d52-b6d5-022be9688809" (UID: "28bcef49-09f5-4d52-b6d5-022be9688809"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.219893 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-config" (OuterVolumeSpecName: "config") pod "28bcef49-09f5-4d52-b6d5-022be9688809" (UID: "28bcef49-09f5-4d52-b6d5-022be9688809"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.220727 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28bcef49-09f5-4d52-b6d5-022be9688809" (UID: "28bcef49-09f5-4d52-b6d5-022be9688809"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.227057 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28bcef49-09f5-4d52-b6d5-022be9688809" (UID: "28bcef49-09f5-4d52-b6d5-022be9688809"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.269944 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.270161 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.270376 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wxhw\" (UniqueName: \"kubernetes.io/projected/28bcef49-09f5-4d52-b6d5-022be9688809-kube-api-access-6wxhw\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.270418 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.270429 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28bcef49-09f5-4d52-b6d5-022be9688809-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.368350 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:42 crc kubenswrapper[4903]: W0128 17:24:42.372818 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8499017e_3250_43f2_a8f9_d1f082b721d8.slice/crio-ed8b9a6ccb0a35d0afc06592ad5ea3bf16d0008a55efaf954dac38ef967429fe WatchSource:0}: Error finding container ed8b9a6ccb0a35d0afc06592ad5ea3bf16d0008a55efaf954dac38ef967429fe: Status 404 returned error can't find the container with id ed8b9a6ccb0a35d0afc06592ad5ea3bf16d0008a55efaf954dac38ef967429fe Jan 28 17:24:42 crc kubenswrapper[4903]: W0128 17:24:42.428567 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636f99d3_1a1b_4672_bb60_891b6af33c36.slice/crio-8316713aaeafb2042d79d8378a0d93e67ef44ce94b71e208dd607371553eeb9b WatchSource:0}: Error finding container 8316713aaeafb2042d79d8378a0d93e67ef44ce94b71e208dd607371553eeb9b: Status 404 returned error can't find the container with id 8316713aaeafb2042d79d8378a0d93e67ef44ce94b71e208dd607371553eeb9b Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.430816 4903 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27770986-feba-4f5f-871b-94400975d141" path="/var/lib/kubelet/pods/27770986-feba-4f5f-871b-94400975d141/volumes" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.431581 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b26e0f-801c-44b9-81bd-f584c967b888" path="/var/lib/kubelet/pods/68b26e0f-801c-44b9-81bd-f584c967b888/volumes" Jan 28 17:24:42 crc kubenswrapper[4903]: I0128 17:24:42.432409 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.108106 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"636f99d3-1a1b-4672-bb60-891b6af33c36","Type":"ContainerStarted","Data":"186f1feaa0b666f0bdc3dcd1aec4ac6ce780f2c18b8997b67c277cc0102e267a"} Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.108470 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"636f99d3-1a1b-4672-bb60-891b6af33c36","Type":"ContainerStarted","Data":"f42cb245ce1f3d6d1ba33b358c06aa4bb03f9ad77faae1e053dece83d035ae4c"} Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.108487 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"636f99d3-1a1b-4672-bb60-891b6af33c36","Type":"ContainerStarted","Data":"8316713aaeafb2042d79d8378a0d93e67ef44ce94b71e208dd607371553eeb9b"} Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.110570 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8499017e-3250-43f2-a8f9-d1f082b721d8","Type":"ContainerStarted","Data":"af0492989d99eb71ded5472f0d26466ad0211c811e1c282be74f357d9631af4b"} Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.110613 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8499017e-3250-43f2-a8f9-d1f082b721d8","Type":"ContainerStarted","Data":"d37440abfd282df5799f942eb878d28eb693d3dcd1093461f2759c914c224219"} Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.110622 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8499017e-3250-43f2-a8f9-d1f082b721d8","Type":"ContainerStarted","Data":"ed8b9a6ccb0a35d0afc06592ad5ea3bf16d0008a55efaf954dac38ef967429fe"} Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.130413 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.130395139 podStartE2EDuration="2.130395139s" podCreationTimestamp="2026-01-28 17:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:43.127577932 +0000 UTC m=+5955.403549443" watchObservedRunningTime="2026-01-28 17:24:43.130395139 +0000 UTC m=+5955.406366650" Jan 28 17:24:43 crc kubenswrapper[4903]: I0128 17:24:43.154811 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.15479196 podStartE2EDuration="2.15479196s" podCreationTimestamp="2026-01-28 17:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:43.149488626 +0000 UTC m=+5955.425460137" watchObservedRunningTime="2026-01-28 17:24:43.15479196 +0000 UTC m=+5955.430763471" Jan 28 17:24:44 crc kubenswrapper[4903]: I0128 17:24:44.413382 4903 
scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:24:44 crc kubenswrapper[4903]: E0128 17:24:44.413998 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:24:46 crc kubenswrapper[4903]: I0128 17:24:46.424346 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:46 crc kubenswrapper[4903]: I0128 17:24:46.436653 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:46 crc kubenswrapper[4903]: I0128 17:24:46.861366 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 17:24:46 crc kubenswrapper[4903]: I0128 17:24:46.861429 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 17:24:47 crc kubenswrapper[4903]: I0128 17:24:47.162230 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.466958 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.915449 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-gg9sn"] Jan 28 17:24:49 crc kubenswrapper[4903]: E0128 17:24:49.915829 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" containerName="init" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.915849 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" containerName="init" Jan 28 17:24:49 crc kubenswrapper[4903]: E0128 17:24:49.915869 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" containerName="dnsmasq-dns" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.915877 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" containerName="dnsmasq-dns" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.916098 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" containerName="dnsmasq-dns" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.916772 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.918428 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.919484 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 17:24:49 crc kubenswrapper[4903]: I0128 17:24:49.933710 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gg9sn"] Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.018367 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.018428 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-scripts\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.018472 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-config-data\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.018522 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48gp\" (UniqueName: \"kubernetes.io/projected/48dcc322-2413-4bfb-a717-25c8fcb8bebb-kube-api-access-f48gp\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.119994 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48gp\" (UniqueName: \"kubernetes.io/projected/48dcc322-2413-4bfb-a717-25c8fcb8bebb-kube-api-access-f48gp\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.120562 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.120628 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-scripts\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.120690 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-config-data\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.127714 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.130903 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-scripts\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.143614 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-config-data\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.154085 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48gp\" (UniqueName: \"kubernetes.io/projected/48dcc322-2413-4bfb-a717-25c8fcb8bebb-kube-api-access-f48gp\") pod \"nova-cell1-cell-mapping-gg9sn\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.247123 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:50 crc kubenswrapper[4903]: I0128 17:24:50.728522 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gg9sn"] Jan 28 17:24:50 crc kubenswrapper[4903]: W0128 17:24:50.732026 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48dcc322_2413_4bfb_a717_25c8fcb8bebb.slice/crio-f7e9c8f8d8d3535b1edf4157568ba720eae850e40a18b64e06dbfdd1665e4c52 WatchSource:0}: Error finding container f7e9c8f8d8d3535b1edf4157568ba720eae850e40a18b64e06dbfdd1665e4c52: Status 404 returned error can't find the container with id f7e9c8f8d8d3535b1edf4157568ba720eae850e40a18b64e06dbfdd1665e4c52 Jan 28 17:24:51 crc kubenswrapper[4903]: I0128 17:24:51.180006 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gg9sn" event={"ID":"48dcc322-2413-4bfb-a717-25c8fcb8bebb","Type":"ContainerStarted","Data":"bac783114cdd7d12e7cf7e386aebbce96e54ae3fa8bb9ced35922b06fa260eef"} Jan 28 17:24:51 crc kubenswrapper[4903]: I0128 17:24:51.180364 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gg9sn" event={"ID":"48dcc322-2413-4bfb-a717-25c8fcb8bebb","Type":"ContainerStarted","Data":"f7e9c8f8d8d3535b1edf4157568ba720eae850e40a18b64e06dbfdd1665e4c52"} Jan 28 17:24:51 crc kubenswrapper[4903]: I0128 17:24:51.203266 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-gg9sn" podStartSLOduration=2.203246937 podStartE2EDuration="2.203246937s" podCreationTimestamp="2026-01-28 17:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:24:51.19782537 +0000 UTC m=+5963.473796881" watchObservedRunningTime="2026-01-28 17:24:51.203246937 +0000 UTC m=+5963.479218448" Jan 28 17:24:51 crc kubenswrapper[4903]: I0128 17:24:51.834261 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 17:24:51 crc kubenswrapper[4903]: I0128 17:24:51.834328 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 17:24:51 crc kubenswrapper[4903]: I0128 17:24:51.861295 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 17:24:51 crc kubenswrapper[4903]: I0128 17:24:51.861349 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 17:24:52 crc kubenswrapper[4903]: I0128 17:24:52.929895 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.80:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 17:24:52 crc kubenswrapper[4903]: I0128 17:24:52.929884 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.79:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:24:52 crc kubenswrapper[4903]: I0128 17:24:52.930196 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.79:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:24:52 crc kubenswrapper[4903]: I0128 17:24:52.930229 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.80:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 17:24:54 crc kubenswrapper[4903]: I0128 17:24:54.571710 4903 scope.go:117] "RemoveContainer" containerID="52e35fb21df20d0136d93a5ea43e22dc5ac41a80bf42076d9d2fd67c2e7681d6" Jan 28 17:24:54 crc kubenswrapper[4903]: I0128 17:24:54.597319 4903 scope.go:117] "RemoveContainer" containerID="a9d6924bc2d76fbeb685535a61ab1ccdd5728d5c6768be5b0ceb3bdd135abc8f" Jan 28 17:24:56 crc kubenswrapper[4903]: I0128 17:24:56.231376 4903 generic.go:334] "Generic (PLEG): container finished" podID="48dcc322-2413-4bfb-a717-25c8fcb8bebb" containerID="bac783114cdd7d12e7cf7e386aebbce96e54ae3fa8bb9ced35922b06fa260eef" exitCode=0 Jan 28 17:24:56 crc kubenswrapper[4903]: I0128 17:24:56.231482 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gg9sn" event={"ID":"48dcc322-2413-4bfb-a717-25c8fcb8bebb","Type":"ContainerDied","Data":"bac783114cdd7d12e7cf7e386aebbce96e54ae3fa8bb9ced35922b06fa260eef"} Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.413914 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:24:57 crc kubenswrapper[4903]: E0128 17:24:57.414232 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.591242 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.675744 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f48gp\" (UniqueName: \"kubernetes.io/projected/48dcc322-2413-4bfb-a717-25c8fcb8bebb-kube-api-access-f48gp\") pod \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.675941 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-scripts\") pod \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.675981 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-config-data\") pod \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.676057 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-combined-ca-bundle\") pod \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\" (UID: \"48dcc322-2413-4bfb-a717-25c8fcb8bebb\") " Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.682989 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48dcc322-2413-4bfb-a717-25c8fcb8bebb-kube-api-access-f48gp" (OuterVolumeSpecName: "kube-api-access-f48gp") pod "48dcc322-2413-4bfb-a717-25c8fcb8bebb" (UID: "48dcc322-2413-4bfb-a717-25c8fcb8bebb"). InnerVolumeSpecName "kube-api-access-f48gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.688690 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-scripts" (OuterVolumeSpecName: "scripts") pod "48dcc322-2413-4bfb-a717-25c8fcb8bebb" (UID: "48dcc322-2413-4bfb-a717-25c8fcb8bebb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.713179 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48dcc322-2413-4bfb-a717-25c8fcb8bebb" (UID: "48dcc322-2413-4bfb-a717-25c8fcb8bebb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.718215 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-config-data" (OuterVolumeSpecName: "config-data") pod "48dcc322-2413-4bfb-a717-25c8fcb8bebb" (UID: "48dcc322-2413-4bfb-a717-25c8fcb8bebb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.778355 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.778394 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f48gp\" (UniqueName: \"kubernetes.io/projected/48dcc322-2413-4bfb-a717-25c8fcb8bebb-kube-api-access-f48gp\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.778406 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:57 crc kubenswrapper[4903]: I0128 17:24:57.778415 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dcc322-2413-4bfb-a717-25c8fcb8bebb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.254673 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gg9sn" event={"ID":"48dcc322-2413-4bfb-a717-25c8fcb8bebb","Type":"ContainerDied","Data":"f7e9c8f8d8d3535b1edf4157568ba720eae850e40a18b64e06dbfdd1665e4c52"} Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.254708 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e9c8f8d8d3535b1edf4157568ba720eae850e40a18b64e06dbfdd1665e4c52" Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.254740 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gg9sn" Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.489049 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.489396 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-log" containerID="cri-o://d37440abfd282df5799f942eb878d28eb693d3dcd1093461f2759c914c224219" gracePeriod=30 Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.489397 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-api" containerID="cri-o://af0492989d99eb71ded5472f0d26466ad0211c811e1c282be74f357d9631af4b" gracePeriod=30 Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.509103 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.509909 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-log" containerID="cri-o://f42cb245ce1f3d6d1ba33b358c06aa4bb03f9ad77faae1e053dece83d035ae4c" gracePeriod=30 Jan 28 17:24:58 crc kubenswrapper[4903]: I0128 17:24:58.510018 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-metadata" containerID="cri-o://186f1feaa0b666f0bdc3dcd1aec4ac6ce780f2c18b8997b67c277cc0102e267a" gracePeriod=30 Jan 28 17:24:59 crc 
kubenswrapper[4903]: I0128 17:24:59.265229 4903 generic.go:334] "Generic (PLEG): container finished" podID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerID="f42cb245ce1f3d6d1ba33b358c06aa4bb03f9ad77faae1e053dece83d035ae4c" exitCode=143 Jan 28 17:24:59 crc kubenswrapper[4903]: I0128 17:24:59.265318 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"636f99d3-1a1b-4672-bb60-891b6af33c36","Type":"ContainerDied","Data":"f42cb245ce1f3d6d1ba33b358c06aa4bb03f9ad77faae1e053dece83d035ae4c"} Jan 28 17:24:59 crc kubenswrapper[4903]: I0128 17:24:59.267244 4903 generic.go:334] "Generic (PLEG): container finished" podID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerID="d37440abfd282df5799f942eb878d28eb693d3dcd1093461f2759c914c224219" exitCode=143 Jan 28 17:24:59 crc kubenswrapper[4903]: I0128 17:24:59.267364 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8499017e-3250-43f2-a8f9-d1f082b721d8","Type":"ContainerDied","Data":"d37440abfd282df5799f942eb878d28eb693d3dcd1093461f2759c914c224219"} Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.382229 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.385778 4903 generic.go:334] "Generic (PLEG): container finished" podID="e38176c3-f52a-4a86-8f6a-6e3740ba81e6" containerID="1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1" exitCode=137 Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.385824 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e38176c3-f52a-4a86-8f6a-6e3740ba81e6","Type":"ContainerDied","Data":"1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1"} Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.385858 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e38176c3-f52a-4a86-8f6a-6e3740ba81e6","Type":"ContainerDied","Data":"6d73d8f5658bc03700ff2354110658618b9911a30157192ca5822e1ff987ef08"} Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.385877 4903 scope.go:117] "RemoveContainer" containerID="1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.385934 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.414312 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:25:11 crc kubenswrapper[4903]: E0128 17:25:11.414625 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.416125 4903 scope.go:117] "RemoveContainer" containerID="1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1" Jan 28 17:25:11 crc kubenswrapper[4903]: E0128 17:25:11.416630 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1\": container with ID starting with 1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1 not found: ID does not exist" containerID="1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.416664 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1"} err="failed to get container status \"1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1\": rpc error: code = NotFound desc = could not find container \"1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1\": container with ID starting with 1b55256eb1164e4661b1e6ec641ca454ba0455f30292b33a7e9b4d3721fce2d1 not found: ID does not exist" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.433280 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grqj7\" (UniqueName: \"kubernetes.io/projected/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-kube-api-access-grqj7\") pod \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.433332 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-config-data\") pod \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.433455 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-combined-ca-bundle\") pod \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\" (UID: \"e38176c3-f52a-4a86-8f6a-6e3740ba81e6\") " Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.448491 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-kube-api-access-grqj7" (OuterVolumeSpecName: "kube-api-access-grqj7") pod "e38176c3-f52a-4a86-8f6a-6e3740ba81e6" (UID: "e38176c3-f52a-4a86-8f6a-6e3740ba81e6"). InnerVolumeSpecName "kube-api-access-grqj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.464879 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e38176c3-f52a-4a86-8f6a-6e3740ba81e6" (UID: "e38176c3-f52a-4a86-8f6a-6e3740ba81e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.467330 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-config-data" (OuterVolumeSpecName: "config-data") pod "e38176c3-f52a-4a86-8f6a-6e3740ba81e6" (UID: "e38176c3-f52a-4a86-8f6a-6e3740ba81e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.536727 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.537476 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.537885 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grqj7\" (UniqueName: \"kubernetes.io/projected/e38176c3-f52a-4a86-8f6a-6e3740ba81e6-kube-api-access-grqj7\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.723771 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.733681 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.746865 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:25:11 crc kubenswrapper[4903]: E0128 17:25:11.747273 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48dcc322-2413-4bfb-a717-25c8fcb8bebb" containerName="nova-manage" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.747298 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="48dcc322-2413-4bfb-a717-25c8fcb8bebb" containerName="nova-manage" Jan 28 17:25:11 crc kubenswrapper[4903]: E0128 17:25:11.747324 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38176c3-f52a-4a86-8f6a-6e3740ba81e6" containerName="nova-scheduler-scheduler" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.747331 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38176c3-f52a-4a86-8f6a-6e3740ba81e6" containerName="nova-scheduler-scheduler" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.747567 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38176c3-f52a-4a86-8f6a-6e3740ba81e6" containerName="nova-scheduler-scheduler" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.747591 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="48dcc322-2413-4bfb-a717-25c8fcb8bebb" containerName="nova-manage" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.748261 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.753137 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.763730 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.834232 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.834277 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.944647 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.944711 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-config-data\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:11 crc kubenswrapper[4903]: I0128 17:25:11.944856 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqzt\" (UniqueName: \"kubernetes.io/projected/ab3464ba-e769-4e18-a7ff-4c752456a9ee-kube-api-access-4mqzt\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.047022 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqzt\" (UniqueName: \"kubernetes.io/projected/ab3464ba-e769-4e18-a7ff-4c752456a9ee-kube-api-access-4mqzt\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.047148 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.047174 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-config-data\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.052411 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-config-data\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.052631 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.069978 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqzt\" (UniqueName: \"kubernetes.io/projected/ab3464ba-e769-4e18-a7ff-4c752456a9ee-kube-api-access-4mqzt\") pod \"nova-scheduler-0\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.368125 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.396602 4903 generic.go:334] "Generic (PLEG): container finished" podID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerID="af0492989d99eb71ded5472f0d26466ad0211c811e1c282be74f357d9631af4b" exitCode=0 Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.396669 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8499017e-3250-43f2-a8f9-d1f082b721d8","Type":"ContainerDied","Data":"af0492989d99eb71ded5472f0d26466ad0211c811e1c282be74f357d9631af4b"} Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.396696 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8499017e-3250-43f2-a8f9-d1f082b721d8","Type":"ContainerDied","Data":"ed8b9a6ccb0a35d0afc06592ad5ea3bf16d0008a55efaf954dac38ef967429fe"} Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.396708 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed8b9a6ccb0a35d0afc06592ad5ea3bf16d0008a55efaf954dac38ef967429fe" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.399469 4903 generic.go:334] "Generic (PLEG): container finished" podID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerID="186f1feaa0b666f0bdc3dcd1aec4ac6ce780f2c18b8997b67c277cc0102e267a" exitCode=0 Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.399504 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"636f99d3-1a1b-4672-bb60-891b6af33c36","Type":"ContainerDied","Data":"186f1feaa0b666f0bdc3dcd1aec4ac6ce780f2c18b8997b67c277cc0102e267a"} Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.399547 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"636f99d3-1a1b-4672-bb60-891b6af33c36","Type":"ContainerDied","Data":"8316713aaeafb2042d79d8378a0d93e67ef44ce94b71e208dd607371553eeb9b"} Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.399560 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8316713aaeafb2042d79d8378a0d93e67ef44ce94b71e208dd607371553eeb9b" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.401898 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.411480 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.429890 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38176c3-f52a-4a86-8f6a-6e3740ba81e6" path="/var/lib/kubelet/pods/e38176c3-f52a-4a86-8f6a-6e3740ba81e6/volumes" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.459962 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-config-data\") pod \"8499017e-3250-43f2-a8f9-d1f082b721d8\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.460008 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-config-data\") pod \"636f99d3-1a1b-4672-bb60-891b6af33c36\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.460053 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-combined-ca-bundle\") pod \"8499017e-3250-43f2-a8f9-d1f082b721d8\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.460149 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636f99d3-1a1b-4672-bb60-891b6af33c36-logs\") pod \"636f99d3-1a1b-4672-bb60-891b6af33c36\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.460194 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8499017e-3250-43f2-a8f9-d1f082b721d8-logs\") pod \"8499017e-3250-43f2-a8f9-d1f082b721d8\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.460235 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9bkm\" (UniqueName: \"kubernetes.io/projected/636f99d3-1a1b-4672-bb60-891b6af33c36-kube-api-access-f9bkm\") pod \"636f99d3-1a1b-4672-bb60-891b6af33c36\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.461160 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/636f99d3-1a1b-4672-bb60-891b6af33c36-logs" (OuterVolumeSpecName: "logs") pod "636f99d3-1a1b-4672-bb60-891b6af33c36" (UID: "636f99d3-1a1b-4672-bb60-891b6af33c36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.463518 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8499017e-3250-43f2-a8f9-d1f082b721d8-logs" (OuterVolumeSpecName: "logs") pod "8499017e-3250-43f2-a8f9-d1f082b721d8" (UID: "8499017e-3250-43f2-a8f9-d1f082b721d8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.466206 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-combined-ca-bundle\") pod \"636f99d3-1a1b-4672-bb60-891b6af33c36\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.466550 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-nova-metadata-tls-certs\") pod \"636f99d3-1a1b-4672-bb60-891b6af33c36\" (UID: \"636f99d3-1a1b-4672-bb60-891b6af33c36\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.466658 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp9lh\" (UniqueName: \"kubernetes.io/projected/8499017e-3250-43f2-a8f9-d1f082b721d8-kube-api-access-lp9lh\") pod \"8499017e-3250-43f2-a8f9-d1f082b721d8\" (UID: \"8499017e-3250-43f2-a8f9-d1f082b721d8\") " Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.472446 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636f99d3-1a1b-4672-bb60-891b6af33c36-kube-api-access-f9bkm" (OuterVolumeSpecName: "kube-api-access-f9bkm") pod "636f99d3-1a1b-4672-bb60-891b6af33c36" (UID: "636f99d3-1a1b-4672-bb60-891b6af33c36"). InnerVolumeSpecName "kube-api-access-f9bkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.479191 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8499017e-3250-43f2-a8f9-d1f082b721d8-kube-api-access-lp9lh" (OuterVolumeSpecName: "kube-api-access-lp9lh") pod "8499017e-3250-43f2-a8f9-d1f082b721d8" (UID: "8499017e-3250-43f2-a8f9-d1f082b721d8"). InnerVolumeSpecName "kube-api-access-lp9lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.488079 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp9lh\" (UniqueName: \"kubernetes.io/projected/8499017e-3250-43f2-a8f9-d1f082b721d8-kube-api-access-lp9lh\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.488123 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636f99d3-1a1b-4672-bb60-891b6af33c36-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.488136 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8499017e-3250-43f2-a8f9-d1f082b721d8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.488147 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9bkm\" (UniqueName: \"kubernetes.io/projected/636f99d3-1a1b-4672-bb60-891b6af33c36-kube-api-access-f9bkm\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.526329 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8499017e-3250-43f2-a8f9-d1f082b721d8" (UID: "8499017e-3250-43f2-a8f9-d1f082b721d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.529703 4903 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod28bcef49-09f5-4d52-b6d5-022be9688809"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod28bcef49-09f5-4d52-b6d5-022be9688809] : Timed out while waiting for systemd to remove kubepods-besteffort-pod28bcef49_09f5_4d52_b6d5_022be9688809.slice" Jan 28 17:25:12 crc kubenswrapper[4903]: E0128 17:25:12.530124 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod28bcef49-09f5-4d52-b6d5-022be9688809] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod28bcef49-09f5-4d52-b6d5-022be9688809] : Timed out while waiting for systemd to remove kubepods-besteffort-pod28bcef49_09f5_4d52_b6d5_022be9688809.slice" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.533354 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-config-data" (OuterVolumeSpecName: "config-data") pod "636f99d3-1a1b-4672-bb60-891b6af33c36" (UID: "636f99d3-1a1b-4672-bb60-891b6af33c36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.534749 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-config-data" (OuterVolumeSpecName: "config-data") pod "8499017e-3250-43f2-a8f9-d1f082b721d8" (UID: "8499017e-3250-43f2-a8f9-d1f082b721d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.590404 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.590443 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.590452 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8499017e-3250-43f2-a8f9-d1f082b721d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.597686 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "636f99d3-1a1b-4672-bb60-891b6af33c36" (UID: "636f99d3-1a1b-4672-bb60-891b6af33c36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.619141 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "636f99d3-1a1b-4672-bb60-891b6af33c36" (UID: "636f99d3-1a1b-4672-bb60-891b6af33c36"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.693783 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.693827 4903 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/636f99d3-1a1b-4672-bb60-891b6af33c36-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:12 crc kubenswrapper[4903]: I0128 17:25:12.885926 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 17:25:12 crc kubenswrapper[4903]: W0128 17:25:12.890708 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab3464ba_e769_4e18_a7ff_4c752456a9ee.slice/crio-1511f11862de2bdb23980960168607932cba63cc8161760ac69d267032243bf9 WatchSource:0}: Error finding container 1511f11862de2bdb23980960168607932cba63cc8161760ac69d267032243bf9: Status 404 returned error can't find the container with id 1511f11862de2bdb23980960168607932cba63cc8161760ac69d267032243bf9 Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.411254 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab3464ba-e769-4e18-a7ff-4c752456a9ee","Type":"ContainerStarted","Data":"7a929a3c1472352096fed7e06d804435e586d826e013870d349fbcd417bf7df1"} Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.411626 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab3464ba-e769-4e18-a7ff-4c752456a9ee","Type":"ContainerStarted","Data":"1511f11862de2bdb23980960168607932cba63cc8161760ac69d267032243bf9"} Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.411358 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.411315 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb6d4cc67-7zkv2" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.411376 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.430280 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.430256606 podStartE2EDuration="2.430256606s" podCreationTimestamp="2026-01-28 17:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:25:13.427887793 +0000 UTC m=+5985.703859304" watchObservedRunningTime="2026-01-28 17:25:13.430256606 +0000 UTC m=+5985.706228127" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.546830 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb6d4cc67-7zkv2"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.583631 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb6d4cc67-7zkv2"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.617665 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.658623 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.685866 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: E0128 17:25:13.686341 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-log" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686357 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-log" Jan 28 17:25:13 crc kubenswrapper[4903]: E0128 17:25:13.686381 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-metadata" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686389 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-metadata" Jan 28 17:25:13 crc kubenswrapper[4903]: E0128 17:25:13.686402 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-log" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686409 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-log" Jan 28 17:25:13 crc kubenswrapper[4903]: E0128 17:25:13.686438 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-api" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686445 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-api" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686719 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" containerName="nova-api-api" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686743 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-metadata" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686760 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" 
containerName="nova-api-log" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.686774 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" containerName="nova-metadata-log" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.687931 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.698390 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.721018 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.744029 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27dc8868-b267-4d4a-8857-84127d0f09dc-logs\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.744119 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-config-data\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.744273 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.744305 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqsc\" (UniqueName: \"kubernetes.io/projected/27dc8868-b267-4d4a-8857-84127d0f09dc-kube-api-access-tdqsc\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.744701 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.757077 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.767600 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.769173 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.771816 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.772095 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.773614 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.845749 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdqsc\" (UniqueName: \"kubernetes.io/projected/27dc8868-b267-4d4a-8857-84127d0f09dc-kube-api-access-tdqsc\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.845792 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.845821 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-config-data\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.845840 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvh4k\" (UniqueName: \"kubernetes.io/projected/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-kube-api-access-tvh4k\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.845952 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-logs\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.845982 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.846011 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27dc8868-b267-4d4a-8857-84127d0f09dc-logs\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.846036 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-config-data\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.846102 4903 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.846635 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27dc8868-b267-4d4a-8857-84127d0f09dc-logs\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.851443 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.853626 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-config-data\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.871621 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdqsc\" (UniqueName: \"kubernetes.io/projected/27dc8868-b267-4d4a-8857-84127d0f09dc-kube-api-access-tdqsc\") pod \"nova-api-0\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " pod="openstack/nova-api-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.947748 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-logs\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.948095 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.948159 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-logs\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.948166 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.948227 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-config-data\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.948250 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvh4k\" (UniqueName: \"kubernetes.io/projected/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-kube-api-access-tvh4k\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.952323 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.953201 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.954315 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-config-data\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:13 crc kubenswrapper[4903]: I0128 17:25:13.964127 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvh4k\" (UniqueName: \"kubernetes.io/projected/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-kube-api-access-tvh4k\") pod \"nova-metadata-0\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " pod="openstack/nova-metadata-0" Jan 28 17:25:14 crc kubenswrapper[4903]: I0128 17:25:14.029784 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:14 crc kubenswrapper[4903]: I0128 17:25:14.087522 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 17:25:14 crc kubenswrapper[4903]: I0128 17:25:14.422659 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28bcef49-09f5-4d52-b6d5-022be9688809" path="/var/lib/kubelet/pods/28bcef49-09f5-4d52-b6d5-022be9688809/volumes" Jan 28 17:25:14 crc kubenswrapper[4903]: I0128 17:25:14.423853 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636f99d3-1a1b-4672-bb60-891b6af33c36" path="/var/lib/kubelet/pods/636f99d3-1a1b-4672-bb60-891b6af33c36/volumes" Jan 28 17:25:14 crc kubenswrapper[4903]: I0128 17:25:14.424448 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8499017e-3250-43f2-a8f9-d1f082b721d8" path="/var/lib/kubelet/pods/8499017e-3250-43f2-a8f9-d1f082b721d8/volumes" Jan 28 17:25:14 crc kubenswrapper[4903]: I0128 17:25:14.491417 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:14 crc kubenswrapper[4903]: W0128 17:25:14.502242 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27dc8868_b267_4d4a_8857_84127d0f09dc.slice/crio-0b03ff109b596349118c77b0336335bc2b90d1882ac71dfc27e1fb1a1dd98855 WatchSource:0}: Error finding container 0b03ff109b596349118c77b0336335bc2b90d1882ac71dfc27e1fb1a1dd98855: Status 404 returned error can't find the container with id 0b03ff109b596349118c77b0336335bc2b90d1882ac71dfc27e1fb1a1dd98855 Jan 28 17:25:14 crc kubenswrapper[4903]: I0128 17:25:14.583382 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 17:25:14 crc kubenswrapper[4903]: W0128 17:25:14.586871 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8fd66b0_cfd3_423c_9ba7_8a6a017c239e.slice/crio-14911dbe71de799cff33716dd6c3224cedae68fc8abae437e5f694edf53636af WatchSource:0}: Error finding container 14911dbe71de799cff33716dd6c3224cedae68fc8abae437e5f694edf53636af: Status 404 returned error can't find the container with id 14911dbe71de799cff33716dd6c3224cedae68fc8abae437e5f694edf53636af Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.430413 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27dc8868-b267-4d4a-8857-84127d0f09dc","Type":"ContainerStarted","Data":"71fc73ee29264dd8eeca7139026fd9d075d55afcfc01c074dbf1c0e44e8361c5"} Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.430754 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27dc8868-b267-4d4a-8857-84127d0f09dc","Type":"ContainerStarted","Data":"ef9375532bc364927ef9c1f1d94906a9d569b7344f926b8590eb81b461632e56"} Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.430765 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27dc8868-b267-4d4a-8857-84127d0f09dc","Type":"ContainerStarted","Data":"0b03ff109b596349118c77b0336335bc2b90d1882ac71dfc27e1fb1a1dd98855"} Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.433287 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e","Type":"ContainerStarted","Data":"3d09968fc0f58cdd94e16d0b629d255e836f2b598eefceb170d29cecdebe2569"} Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.433362 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e","Type":"ContainerStarted","Data":"909ff26c97bfd0b3061edc9b87e9f176717635cd4a9cf3a1bdd9edf777821d6f"} Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.433378 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e","Type":"ContainerStarted","Data":"14911dbe71de799cff33716dd6c3224cedae68fc8abae437e5f694edf53636af"} Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.471059 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.471036464 podStartE2EDuration="2.471036464s" podCreationTimestamp="2026-01-28 17:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:25:15.458660478 +0000 UTC m=+5987.734631989" watchObservedRunningTime="2026-01-28 17:25:15.471036464 +0000 UTC m=+5987.747007975" Jan 28 17:25:15 crc kubenswrapper[4903]: I0128 17:25:15.487496 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.48747361 podStartE2EDuration="2.48747361s" podCreationTimestamp="2026-01-28 17:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:25:15.480222613 +0000 UTC m=+5987.756194124" watchObservedRunningTime="2026-01-28 17:25:15.48747361 +0000 UTC m=+5987.763445131" Jan 28 17:25:17 crc kubenswrapper[4903]: I0128 17:25:17.368381 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 17:25:19 crc kubenswrapper[4903]: I0128 17:25:19.088397 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 17:25:19 crc kubenswrapper[4903]: I0128 17:25:19.088966 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 17:25:22 crc kubenswrapper[4903]: I0128 17:25:22.369208 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 17:25:22 crc kubenswrapper[4903]: I0128 17:25:22.407158 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 17:25:22 crc kubenswrapper[4903]: I0128 17:25:22.568219 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 17:25:24 crc kubenswrapper[4903]: I0128 17:25:24.030563 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 17:25:24 crc kubenswrapper[4903]: I0128 17:25:24.030633 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 17:25:24 crc kubenswrapper[4903]: I0128 17:25:24.087898 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 17:25:24 crc kubenswrapper[4903]: I0128 17:25:24.088049 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 17:25:25 crc kubenswrapper[4903]: I0128 17:25:25.113677 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.83:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 28 17:25:25 crc kubenswrapper[4903]: I0128 17:25:25.113677 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.83:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:25:25 crc kubenswrapper[4903]: I0128 17:25:25.129705 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.84:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 17:25:25 crc kubenswrapper[4903]: I0128 17:25:25.129882 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.84:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 17:25:26 crc kubenswrapper[4903]: I0128 17:25:26.415895 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:25:26 crc kubenswrapper[4903]: E0128 17:25:26.416240 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.034647 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.035206 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.035599 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.035626 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.040627 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.041153 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.094566 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.094937 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.099548 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.104395 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.257352 4903 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54d795b979-fg72n"] Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.258812 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.293603 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d795b979-fg72n"] Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.374849 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-config\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.375014 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-dns-svc\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.375090 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-nb\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.375177 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-sb\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.375243 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2gkh\" (UniqueName: \"kubernetes.io/projected/68b6b21a-e766-4b5f-944f-ced63870b9c0-kube-api-access-s2gkh\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.478143 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-dns-svc\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.478202 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-nb\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.478245 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-sb\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: 
\"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.478272 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2gkh\" (UniqueName: \"kubernetes.io/projected/68b6b21a-e766-4b5f-944f-ced63870b9c0-kube-api-access-s2gkh\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.478355 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-config\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.480190 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-config\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.480244 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-dns-svc\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.480627 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-nb\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.480836 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-sb\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.514824 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2gkh\" (UniqueName: \"kubernetes.io/projected/68b6b21a-e766-4b5f-944f-ced63870b9c0-kube-api-access-s2gkh\") pod \"dnsmasq-dns-54d795b979-fg72n\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:34 crc kubenswrapper[4903]: I0128 17:25:34.585795 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:35 crc kubenswrapper[4903]: I0128 17:25:35.090114 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d795b979-fg72n"] Jan 28 17:25:35 crc kubenswrapper[4903]: I0128 17:25:35.617110 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d795b979-fg72n" event={"ID":"68b6b21a-e766-4b5f-944f-ced63870b9c0","Type":"ContainerStarted","Data":"684cf576c302cd66effade45a1073940d00ea204dfe3085a2be12f37d0964303"} Jan 28 17:25:35 crc kubenswrapper[4903]: I0128 17:25:35.617761 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d795b979-fg72n" event={"ID":"68b6b21a-e766-4b5f-944f-ced63870b9c0","Type":"ContainerStarted","Data":"c79c7bb8eefc6f2fce0e7275703cfa0928e2e98820647d40eb6c6eb2134e7666"} Jan 28 17:25:36 crc kubenswrapper[4903]: I0128 17:25:36.632258 4903 generic.go:334] "Generic (PLEG): container finished" podID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerID="684cf576c302cd66effade45a1073940d00ea204dfe3085a2be12f37d0964303" exitCode=0 Jan 28 17:25:36 crc kubenswrapper[4903]: I0128 17:25:36.632348 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d795b979-fg72n" event={"ID":"68b6b21a-e766-4b5f-944f-ced63870b9c0","Type":"ContainerDied","Data":"684cf576c302cd66effade45a1073940d00ea204dfe3085a2be12f37d0964303"} Jan 28 17:25:36 crc kubenswrapper[4903]: I0128 17:25:36.972660 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:36 crc kubenswrapper[4903]: I0128 17:25:36.973586 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-log" containerID="cri-o://ef9375532bc364927ef9c1f1d94906a9d569b7344f926b8590eb81b461632e56" gracePeriod=30 Jan 28 17:25:36 crc kubenswrapper[4903]: I0128 17:25:36.975095 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-api" containerID="cri-o://71fc73ee29264dd8eeca7139026fd9d075d55afcfc01c074dbf1c0e44e8361c5" gracePeriod=30 Jan 28 17:25:37 crc kubenswrapper[4903]: I0128 17:25:37.641982 4903 generic.go:334] "Generic (PLEG): container finished" podID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerID="ef9375532bc364927ef9c1f1d94906a9d569b7344f926b8590eb81b461632e56" exitCode=143 Jan 28 17:25:37 crc kubenswrapper[4903]: I0128 17:25:37.642046 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27dc8868-b267-4d4a-8857-84127d0f09dc","Type":"ContainerDied","Data":"ef9375532bc364927ef9c1f1d94906a9d569b7344f926b8590eb81b461632e56"} Jan 28 17:25:37 crc kubenswrapper[4903]: I0128 17:25:37.645500 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d795b979-fg72n" event={"ID":"68b6b21a-e766-4b5f-944f-ced63870b9c0","Type":"ContainerStarted","Data":"d3121b1cc57323ec427df8a1b59c0e9ad4871a2f2dbde128889149873684441c"} Jan 28 17:25:37 crc kubenswrapper[4903]: I0128 17:25:37.645669 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:37 crc kubenswrapper[4903]: I0128 17:25:37.663028 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54d795b979-fg72n" podStartSLOduration=3.663008722 podStartE2EDuration="3.663008722s" 
podCreationTimestamp="2026-01-28 17:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:25:37.660402992 +0000 UTC m=+6009.936374503" watchObservedRunningTime="2026-01-28 17:25:37.663008722 +0000 UTC m=+6009.938980234" Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.414381 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.677896 4903 generic.go:334] "Generic (PLEG): container finished" podID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerID="71fc73ee29264dd8eeca7139026fd9d075d55afcfc01c074dbf1c0e44e8361c5" exitCode=0 Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.677976 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27dc8868-b267-4d4a-8857-84127d0f09dc","Type":"ContainerDied","Data":"71fc73ee29264dd8eeca7139026fd9d075d55afcfc01c074dbf1c0e44e8361c5"} Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.678267 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"27dc8868-b267-4d4a-8857-84127d0f09dc","Type":"ContainerDied","Data":"0b03ff109b596349118c77b0336335bc2b90d1882ac71dfc27e1fb1a1dd98855"} Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.678284 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b03ff109b596349118c77b0336335bc2b90d1882ac71dfc27e1fb1a1dd98855" Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.744457 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.925815 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27dc8868-b267-4d4a-8857-84127d0f09dc-logs\") pod \"27dc8868-b267-4d4a-8857-84127d0f09dc\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.925989 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-config-data\") pod \"27dc8868-b267-4d4a-8857-84127d0f09dc\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.926082 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdqsc\" (UniqueName: \"kubernetes.io/projected/27dc8868-b267-4d4a-8857-84127d0f09dc-kube-api-access-tdqsc\") pod \"27dc8868-b267-4d4a-8857-84127d0f09dc\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.926133 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-combined-ca-bundle\") pod \"27dc8868-b267-4d4a-8857-84127d0f09dc\" (UID: \"27dc8868-b267-4d4a-8857-84127d0f09dc\") " Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.927389 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27dc8868-b267-4d4a-8857-84127d0f09dc-logs" (OuterVolumeSpecName: "logs") pod "27dc8868-b267-4d4a-8857-84127d0f09dc" (UID: "27dc8868-b267-4d4a-8857-84127d0f09dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.934972 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27dc8868-b267-4d4a-8857-84127d0f09dc-kube-api-access-tdqsc" (OuterVolumeSpecName: "kube-api-access-tdqsc") pod "27dc8868-b267-4d4a-8857-84127d0f09dc" (UID: "27dc8868-b267-4d4a-8857-84127d0f09dc"). InnerVolumeSpecName "kube-api-access-tdqsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.957007 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27dc8868-b267-4d4a-8857-84127d0f09dc" (UID: "27dc8868-b267-4d4a-8857-84127d0f09dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:41 crc kubenswrapper[4903]: I0128 17:25:41.957683 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-config-data" (OuterVolumeSpecName: "config-data") pod "27dc8868-b267-4d4a-8857-84127d0f09dc" (UID: "27dc8868-b267-4d4a-8857-84127d0f09dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.028581 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.028622 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27dc8868-b267-4d4a-8857-84127d0f09dc-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.028634 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27dc8868-b267-4d4a-8857-84127d0f09dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.028645 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdqsc\" (UniqueName: \"kubernetes.io/projected/27dc8868-b267-4d4a-8857-84127d0f09dc-kube-api-access-tdqsc\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.690201 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"ee28cc3262e4fea1138e33197444030f45138047131bb3fe3acbf3798be6fb9a"} Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.690221 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.741253 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.756835 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.777398 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:42 crc kubenswrapper[4903]: E0128 17:25:42.777918 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-api" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.777939 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-api" Jan 28 17:25:42 crc kubenswrapper[4903]: E0128 17:25:42.777970 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-log" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.777978 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-log" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.778187 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-api" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.778216 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" containerName="nova-api-log" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.787492 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.791371 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.791723 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.791871 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.827595 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.846009 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.846145 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-config-data\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.846216 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-logs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.846328 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7khq\" (UniqueName: \"kubernetes.io/projected/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-kube-api-access-k7khq\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.846412 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.846480 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.948302 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.948553 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-public-tls-certs\") pod 
\"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.948611 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-config-data\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.948661 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-logs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.948744 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7khq\" (UniqueName: \"kubernetes.io/projected/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-kube-api-access-k7khq\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.948805 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.949751 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-logs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.955067 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-config-data\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.956976 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.964265 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.967365 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-public-tls-certs\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " pod="openstack/nova-api-0" Jan 28 17:25:42 crc kubenswrapper[4903]: I0128 17:25:42.968416 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7khq\" (UniqueName: \"kubernetes.io/projected/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-kube-api-access-k7khq\") pod \"nova-api-0\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " 
pod="openstack/nova-api-0" Jan 28 17:25:43 crc kubenswrapper[4903]: I0128 17:25:43.109798 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 17:25:43 crc kubenswrapper[4903]: I0128 17:25:43.562939 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 17:25:43 crc kubenswrapper[4903]: W0128 17:25:43.571193 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecee36c3_73e5_4e3b_8eb8_c29eae84dab5.slice/crio-adbe9eb04e81af29d5f433d20491f56e9618a6dfb1996b93478fae3afc0fb9e7 WatchSource:0}: Error finding container adbe9eb04e81af29d5f433d20491f56e9618a6dfb1996b93478fae3afc0fb9e7: Status 404 returned error can't find the container with id adbe9eb04e81af29d5f433d20491f56e9618a6dfb1996b93478fae3afc0fb9e7 Jan 28 17:25:43 crc kubenswrapper[4903]: I0128 17:25:43.700543 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5","Type":"ContainerStarted","Data":"adbe9eb04e81af29d5f433d20491f56e9618a6dfb1996b93478fae3afc0fb9e7"} Jan 28 17:25:44 crc kubenswrapper[4903]: I0128 17:25:44.426658 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27dc8868-b267-4d4a-8857-84127d0f09dc" path="/var/lib/kubelet/pods/27dc8868-b267-4d4a-8857-84127d0f09dc/volumes" Jan 28 17:25:44 crc kubenswrapper[4903]: I0128 17:25:44.588157 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:25:44 crc kubenswrapper[4903]: I0128 17:25:44.672700 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dfb8bbdc-wqzhl"] Jan 28 17:25:44 crc kubenswrapper[4903]: I0128 17:25:44.673061 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" podUID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerName="dnsmasq-dns" containerID="cri-o://02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3" gracePeriod=10 Jan 28 17:25:44 crc kubenswrapper[4903]: I0128 17:25:44.723981 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5","Type":"ContainerStarted","Data":"82fea43a61dc69dea5960abf4d7ddf92fde43e925d930d2f5160a1444485723d"} Jan 28 17:25:44 crc kubenswrapper[4903]: I0128 17:25:44.724059 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5","Type":"ContainerStarted","Data":"aba26193ab75a927a7f1998623cec92597d372c49578d8cc33d5e37ea6f0b0ce"} Jan 28 17:25:44 crc kubenswrapper[4903]: I0128 17:25:44.756648 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.75662064 podStartE2EDuration="2.75662064s" podCreationTimestamp="2026-01-28 17:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:25:44.744347127 +0000 UTC m=+6017.020318648" watchObservedRunningTime="2026-01-28 17:25:44.75662064 +0000 UTC m=+6017.032592171" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.386258 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.490983 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj4ng\" (UniqueName: \"kubernetes.io/projected/aa7692b0-11b6-4799-8cb5-36b15433a134-kube-api-access-bj4ng\") pod \"aa7692b0-11b6-4799-8cb5-36b15433a134\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.491072 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-nb\") pod \"aa7692b0-11b6-4799-8cb5-36b15433a134\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.491176 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-dns-svc\") pod \"aa7692b0-11b6-4799-8cb5-36b15433a134\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.491218 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-sb\") pod \"aa7692b0-11b6-4799-8cb5-36b15433a134\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.491413 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-config\") pod \"aa7692b0-11b6-4799-8cb5-36b15433a134\" (UID: \"aa7692b0-11b6-4799-8cb5-36b15433a134\") " Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.496680 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7692b0-11b6-4799-8cb5-36b15433a134-kube-api-access-bj4ng" (OuterVolumeSpecName: "kube-api-access-bj4ng") pod "aa7692b0-11b6-4799-8cb5-36b15433a134" (UID: "aa7692b0-11b6-4799-8cb5-36b15433a134"). InnerVolumeSpecName "kube-api-access-bj4ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.555491 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa7692b0-11b6-4799-8cb5-36b15433a134" (UID: "aa7692b0-11b6-4799-8cb5-36b15433a134"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.560436 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-config" (OuterVolumeSpecName: "config") pod "aa7692b0-11b6-4799-8cb5-36b15433a134" (UID: "aa7692b0-11b6-4799-8cb5-36b15433a134"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.562897 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa7692b0-11b6-4799-8cb5-36b15433a134" (UID: "aa7692b0-11b6-4799-8cb5-36b15433a134"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.586412 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa7692b0-11b6-4799-8cb5-36b15433a134" (UID: "aa7692b0-11b6-4799-8cb5-36b15433a134"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.593813 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.593851 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj4ng\" (UniqueName: \"kubernetes.io/projected/aa7692b0-11b6-4799-8cb5-36b15433a134-kube-api-access-bj4ng\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.593865 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.593876 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.593885 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa7692b0-11b6-4799-8cb5-36b15433a134-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.732670 4903 generic.go:334] "Generic (PLEG): container finished" podID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerID="02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3" exitCode=0 Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.732721 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.732765 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" event={"ID":"aa7692b0-11b6-4799-8cb5-36b15433a134","Type":"ContainerDied","Data":"02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3"} Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.732811 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59dfb8bbdc-wqzhl" event={"ID":"aa7692b0-11b6-4799-8cb5-36b15433a134","Type":"ContainerDied","Data":"fe305b5b7d501391af9c401f7aecc181d7010e2c39fb2e8a3bc26dd361c25379"} Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.732836 4903 scope.go:117] "RemoveContainer" containerID="02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.758244 4903 scope.go:117] "RemoveContainer" containerID="088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.766640 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59dfb8bbdc-wqzhl"] Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.780659 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59dfb8bbdc-wqzhl"] Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.794594 4903 scope.go:117] "RemoveContainer" containerID="02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3" Jan 28 17:25:45 crc kubenswrapper[4903]: E0128 17:25:45.794996 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3\": container with ID starting with 02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3 not found: ID does not exist" containerID="02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.795034 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3"} err="failed to get container status \"02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3\": rpc error: code = NotFound desc = could not find container \"02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3\": container with ID starting with 02f4596a099b4755b5946b37c87cea60364f6e04911f4740132113ba1131d9e3 not found: ID does not exist" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.795059 4903 scope.go:117] "RemoveContainer" containerID="088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0" Jan 28 17:25:45 crc kubenswrapper[4903]: E0128 17:25:45.795343 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0\": container with ID starting with 088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0 not found: ID does not exist" containerID="088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0" Jan 28 17:25:45 crc kubenswrapper[4903]: I0128 17:25:45.795394 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0"} err="failed to get container status 
\"088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0\": rpc error: code = NotFound desc = could not find container \"088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0\": container with ID starting with 088f90e0a6b4c43ec8e5555af008be9f60153613a9b00c8a643ca9f82d342bd0 not found: ID does not exist" Jan 28 17:25:46 crc kubenswrapper[4903]: I0128 17:25:46.424642 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7692b0-11b6-4799-8cb5-36b15433a134" path="/var/lib/kubelet/pods/aa7692b0-11b6-4799-8cb5-36b15433a134/volumes" Jan 28 17:25:53 crc kubenswrapper[4903]: I0128 17:25:53.110292 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 17:25:53 crc kubenswrapper[4903]: I0128 17:25:53.110920 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 17:25:54 crc kubenswrapper[4903]: I0128 17:25:54.121687 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.86:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 17:25:54 crc kubenswrapper[4903]: I0128 17:25:54.121728 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.86:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 17:25:54 crc kubenswrapper[4903]: I0128 17:25:54.727880 4903 scope.go:117] "RemoveContainer" containerID="17d115ab2775241dd2074cb918029507d33eb101cc37e3982d882f28c3db6017" Jan 28 17:25:54 crc kubenswrapper[4903]: I0128 17:25:54.751618 4903 scope.go:117] "RemoveContainer" containerID="5d6d8122efb4a39789583a27661cae3e668dec6b4b9b03b2cf966d81b6e5bc9a" Jan 28 17:25:57 crc kubenswrapper[4903]: I0128 17:25:57.042082 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f4a1-account-create-update-4rsdx"] Jan 28 17:25:57 crc kubenswrapper[4903]: I0128 17:25:57.053583 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kq2xd"] Jan 28 17:25:57 crc kubenswrapper[4903]: I0128 17:25:57.082674 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kq2xd"] Jan 28 17:25:57 crc kubenswrapper[4903]: I0128 17:25:57.091552 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f4a1-account-create-update-4rsdx"] Jan 28 17:25:58 crc kubenswrapper[4903]: I0128 17:25:58.424011 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08a9abc-0511-41a5-8409-f1b5411ddff0" path="/var/lib/kubelet/pods/b08a9abc-0511-41a5-8409-f1b5411ddff0/volumes" Jan 28 17:25:58 crc kubenswrapper[4903]: I0128 17:25:58.425420 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb83cac8-698a-483b-9643-1f6f37fdd873" path="/var/lib/kubelet/pods/eb83cac8-698a-483b-9643-1f6f37fdd873/volumes" Jan 28 17:26:03 crc kubenswrapper[4903]: I0128 17:26:03.117264 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 17:26:03 crc kubenswrapper[4903]: I0128 17:26:03.118304 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 17:26:03 crc kubenswrapper[4903]: I0128 17:26:03.119314 4903 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 17:26:03 crc kubenswrapper[4903]: I0128 17:26:03.135060 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 17:26:03 crc kubenswrapper[4903]: I0128 17:26:03.892672 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 17:26:03 crc kubenswrapper[4903]: I0128 17:26:03.898656 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 17:26:07 crc kubenswrapper[4903]: I0128 17:26:07.047073 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pr2c7"] Jan 28 17:26:07 crc kubenswrapper[4903]: I0128 17:26:07.066331 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pr2c7"] Jan 28 17:26:08 crc kubenswrapper[4903]: I0128 17:26:08.432469 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523bbae2-5948-4985-978f-4c728efb853d" path="/var/lib/kubelet/pods/523bbae2-5948-4985-978f-4c728efb853d/volumes" Jan 28 17:26:21 crc kubenswrapper[4903]: I0128 17:26:21.041799 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xz4k9"] Jan 28 17:26:21 crc kubenswrapper[4903]: I0128 17:26:21.051551 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xz4k9"] Jan 28 17:26:22 crc kubenswrapper[4903]: I0128 17:26:22.423445 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6925b860-6acd-41e5-a575-5a3d6cb9bb64" path="/var/lib/kubelet/pods/6925b860-6acd-41e5-a575-5a3d6cb9bb64/volumes" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.130441 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-krf4w"] Jan 28 17:26:25 crc kubenswrapper[4903]: E0128 17:26:25.131130 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerName="init" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.131143 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerName="init" Jan 28 17:26:25 crc kubenswrapper[4903]: E0128 17:26:25.131175 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerName="dnsmasq-dns" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.131181 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerName="dnsmasq-dns" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.131353 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7692b0-11b6-4799-8cb5-36b15433a134" containerName="dnsmasq-dns" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.131975 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.137220 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.137260 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7gzkk" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.137730 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.151124 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-krf4w"] Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.178698 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6w9c9"] Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.181037 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.249461 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6w9c9"] Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297654 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-run-ovn\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297716 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2182df2f-8691-434f-990e-67e58ba8dd45-scripts\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297736 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-etc-ovs\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297772 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-run\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297799 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xjx9\" (UniqueName: \"kubernetes.io/projected/c72df41e-a2b4-481c-b723-9cf50af98f8e-kube-api-access-4xjx9\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297819 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c72df41e-a2b4-481c-b723-9cf50af98f8e-scripts\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " 
pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297847 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-log\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297880 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c72df41e-a2b4-481c-b723-9cf50af98f8e-ovn-controller-tls-certs\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297903 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvp6n\" (UniqueName: \"kubernetes.io/projected/2182df2f-8691-434f-990e-67e58ba8dd45-kube-api-access-pvp6n\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297951 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-run\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297969 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72df41e-a2b4-481c-b723-9cf50af98f8e-combined-ca-bundle\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.297987 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-lib\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.298006 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-log-ovn\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399153 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-log\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399542 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-log\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc 
kubenswrapper[4903]: I0128 17:26:25.399564 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c72df41e-a2b4-481c-b723-9cf50af98f8e-ovn-controller-tls-certs\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399628 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvp6n\" (UniqueName: \"kubernetes.io/projected/2182df2f-8691-434f-990e-67e58ba8dd45-kube-api-access-pvp6n\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399719 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-run\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399757 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72df41e-a2b4-481c-b723-9cf50af98f8e-combined-ca-bundle\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399783 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-lib\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399813 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-log-ovn\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399877 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-run-ovn\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399883 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-run\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399917 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2182df2f-8691-434f-990e-67e58ba8dd45-scripts\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.399989 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-etc-ovs\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400032 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-var-lib\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400095 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-run\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400147 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xjx9\" (UniqueName: \"kubernetes.io/projected/c72df41e-a2b4-481c-b723-9cf50af98f8e-kube-api-access-4xjx9\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400178 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c72df41e-a2b4-481c-b723-9cf50af98f8e-scripts\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400635 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-log-ovn\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400763 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-run-ovn\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400860 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c72df41e-a2b4-481c-b723-9cf50af98f8e-var-run\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.400965 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2182df2f-8691-434f-990e-67e58ba8dd45-etc-ovs\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.402258 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c72df41e-a2b4-481c-b723-9cf50af98f8e-scripts\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.402391 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2182df2f-8691-434f-990e-67e58ba8dd45-scripts\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.406910 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c72df41e-a2b4-481c-b723-9cf50af98f8e-ovn-controller-tls-certs\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.413281 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c72df41e-a2b4-481c-b723-9cf50af98f8e-combined-ca-bundle\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.417731 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvp6n\" (UniqueName: \"kubernetes.io/projected/2182df2f-8691-434f-990e-67e58ba8dd45-kube-api-access-pvp6n\") pod \"ovn-controller-ovs-6w9c9\" (UID: \"2182df2f-8691-434f-990e-67e58ba8dd45\") " pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.422003 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjx9\" (UniqueName: \"kubernetes.io/projected/c72df41e-a2b4-481c-b723-9cf50af98f8e-kube-api-access-4xjx9\") pod \"ovn-controller-krf4w\" (UID: \"c72df41e-a2b4-481c-b723-9cf50af98f8e\") " pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.460880 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-krf4w" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.504912 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:25 crc kubenswrapper[4903]: I0128 17:26:25.946161 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-krf4w"] Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.134492 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w" event={"ID":"c72df41e-a2b4-481c-b723-9cf50af98f8e","Type":"ContainerStarted","Data":"49dd176693ad049338fe2a393996b74487d26fbc9c4af3ae61ae68a8186d12f4"} Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.370760 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6w9c9"] Jan 28 17:26:26 crc kubenswrapper[4903]: W0128 17:26:26.373840 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2182df2f_8691_434f_990e_67e58ba8dd45.slice/crio-af3cef84d97c8d14d3d86898abd39fc2cb939107dbd7fabd8b037b4345566734 WatchSource:0}: Error finding container af3cef84d97c8d14d3d86898abd39fc2cb939107dbd7fabd8b037b4345566734: Status 404 returned error can't find the container with id af3cef84d97c8d14d3d86898abd39fc2cb939107dbd7fabd8b037b4345566734 Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.580884 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-n7mn8"] Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.582713 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.590011 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.598506 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n7mn8"] Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.723520 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-ovs-rundir\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.725666 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-ovn-rundir\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.725773 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-config\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.725876 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc 
kubenswrapper[4903]: I0128 17:26:26.726005 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-combined-ca-bundle\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.726091 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64mg\" (UniqueName: \"kubernetes.io/projected/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-kube-api-access-t64mg\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.829058 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-ovn-rundir\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.829190 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-config\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.829266 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.829309 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-ovn-rundir\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.829413 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-combined-ca-bundle\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.829478 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64mg\" (UniqueName: \"kubernetes.io/projected/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-kube-api-access-t64mg\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.829561 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-ovs-rundir\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc 
kubenswrapper[4903]: I0128 17:26:26.830131 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-config\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.830221 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-ovs-rundir\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.836369 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.847636 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-combined-ca-bundle\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.847651 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64mg\" (UniqueName: \"kubernetes.io/projected/5aeb6324-f2b7-463e-9bf6-587a6fecc51a-kube-api-access-t64mg\") pod \"ovn-controller-metrics-n7mn8\" (UID: \"5aeb6324-f2b7-463e-9bf6-587a6fecc51a\") " pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:26 crc kubenswrapper[4903]: I0128 17:26:26.933285 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-n7mn8" Jan 28 17:26:27 crc kubenswrapper[4903]: I0128 17:26:27.145688 4903 generic.go:334] "Generic (PLEG): container finished" podID="2182df2f-8691-434f-990e-67e58ba8dd45" containerID="4be9411d0d73ff746b254673cc5b7b6072d9ec28d3ca71c92736dc648008ddb0" exitCode=0 Jan 28 17:26:27 crc kubenswrapper[4903]: I0128 17:26:27.145826 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6w9c9" event={"ID":"2182df2f-8691-434f-990e-67e58ba8dd45","Type":"ContainerDied","Data":"4be9411d0d73ff746b254673cc5b7b6072d9ec28d3ca71c92736dc648008ddb0"} Jan 28 17:26:27 crc kubenswrapper[4903]: I0128 17:26:27.146109 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6w9c9" event={"ID":"2182df2f-8691-434f-990e-67e58ba8dd45","Type":"ContainerStarted","Data":"af3cef84d97c8d14d3d86898abd39fc2cb939107dbd7fabd8b037b4345566734"} Jan 28 17:26:27 crc kubenswrapper[4903]: I0128 17:26:27.148813 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w" event={"ID":"c72df41e-a2b4-481c-b723-9cf50af98f8e","Type":"ContainerStarted","Data":"932466b786656f15dd780f13a2cea3b6438006d9984cf47a2aadb48e16fa2f5a"} Jan 28 17:26:27 crc kubenswrapper[4903]: I0128 17:26:27.149009 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-krf4w" Jan 28 17:26:27 crc kubenswrapper[4903]: I0128 17:26:27.193925 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-krf4w" podStartSLOduration=2.193907654 podStartE2EDuration="2.193907654s" podCreationTimestamp="2026-01-28 17:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:26:27.191947521 +0000 UTC m=+6059.467919032" watchObservedRunningTime="2026-01-28 17:26:27.193907654 +0000 UTC m=+6059.469879165" Jan 28 17:26:27 crc kubenswrapper[4903]: I0128 17:26:27.377383 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n7mn8"] Jan 28 17:26:27 crc kubenswrapper[4903]: W0128 17:26:27.386620 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aeb6324_f2b7_463e_9bf6_587a6fecc51a.slice/crio-6b755009c59e75200e82adad9a261ed4dd41c7bdad22c36668a9cef0c0bbb4d5 WatchSource:0}: Error finding container 6b755009c59e75200e82adad9a261ed4dd41c7bdad22c36668a9cef0c0bbb4d5: Status 404 returned error can't find the container with id 6b755009c59e75200e82adad9a261ed4dd41c7bdad22c36668a9cef0c0bbb4d5 Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.159933 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6w9c9" event={"ID":"2182df2f-8691-434f-990e-67e58ba8dd45","Type":"ContainerStarted","Data":"b1ecbc9cf2570196c3cd57201512593b1e0fa9203d8dba63afd2911d8955f3b9"} Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.160338 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.160364 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.160378 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6w9c9" 
event={"ID":"2182df2f-8691-434f-990e-67e58ba8dd45","Type":"ContainerStarted","Data":"bdb2f985e5fd3e948708e390954d6aa159f0b74c1206460a83b9235f213b03f9"} Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.163443 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n7mn8" event={"ID":"5aeb6324-f2b7-463e-9bf6-587a6fecc51a","Type":"ContainerStarted","Data":"4aea5fedceec11f7f942228a007f3e8f0c5aac96ba2a1cef1dae02ca0cb8fb0b"} Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.163493 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n7mn8" event={"ID":"5aeb6324-f2b7-463e-9bf6-587a6fecc51a","Type":"ContainerStarted","Data":"6b755009c59e75200e82adad9a261ed4dd41c7bdad22c36668a9cef0c0bbb4d5"} Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.226451 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6w9c9" podStartSLOduration=3.226431512 podStartE2EDuration="3.226431512s" podCreationTimestamp="2026-01-28 17:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:26:28.206161782 +0000 UTC m=+6060.482133293" watchObservedRunningTime="2026-01-28 17:26:28.226431512 +0000 UTC m=+6060.502403023" Jan 28 17:26:28 crc kubenswrapper[4903]: I0128 17:26:28.256074 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-n7mn8" podStartSLOduration=2.256054655 podStartE2EDuration="2.256054655s" podCreationTimestamp="2026-01-28 17:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:26:28.249424235 +0000 UTC m=+6060.525395746" watchObservedRunningTime="2026-01-28 17:26:28.256054655 +0000 UTC m=+6060.532026166" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.432643 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-vch6z"] Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.434788 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.442667 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-vch6z"] Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.586913 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrt4r\" (UniqueName: \"kubernetes.io/projected/86350aa2-f96f-4ef9-9972-59ceda005637-kube-api-access-wrt4r\") pod \"octavia-db-create-vch6z\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.587423 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86350aa2-f96f-4ef9-9972-59ceda005637-operator-scripts\") pod \"octavia-db-create-vch6z\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.689587 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86350aa2-f96f-4ef9-9972-59ceda005637-operator-scripts\") pod \"octavia-db-create-vch6z\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.689891 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrt4r\" (UniqueName: \"kubernetes.io/projected/86350aa2-f96f-4ef9-9972-59ceda005637-kube-api-access-wrt4r\") pod \"octavia-db-create-vch6z\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.690612 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86350aa2-f96f-4ef9-9972-59ceda005637-operator-scripts\") pod \"octavia-db-create-vch6z\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.710860 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrt4r\" (UniqueName: \"kubernetes.io/projected/86350aa2-f96f-4ef9-9972-59ceda005637-kube-api-access-wrt4r\") pod \"octavia-db-create-vch6z\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:42 crc kubenswrapper[4903]: I0128 17:26:42.755704 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.193866 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-3c95-account-create-update-wwn2s"] Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.195679 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.198106 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.206358 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-3c95-account-create-update-wwn2s"] Jan 28 17:26:43 crc kubenswrapper[4903]: W0128 17:26:43.250494 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86350aa2_f96f_4ef9_9972_59ceda005637.slice/crio-c16c697ebc8d5206567d7a9bb8f62001d30dbd98bc405db0ff99d84f47ca0e7f WatchSource:0}: Error finding container c16c697ebc8d5206567d7a9bb8f62001d30dbd98bc405db0ff99d84f47ca0e7f: Status 404 returned error can't find the container with id c16c697ebc8d5206567d7a9bb8f62001d30dbd98bc405db0ff99d84f47ca0e7f Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.254283 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-vch6z"] Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.300766 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/895bdd55-2240-428e-9fad-4449bb7cbe36-operator-scripts\") pod \"octavia-3c95-account-create-update-wwn2s\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.300873 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwmhm\" (UniqueName: \"kubernetes.io/projected/895bdd55-2240-428e-9fad-4449bb7cbe36-kube-api-access-gwmhm\") pod \"octavia-3c95-account-create-update-wwn2s\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.403185 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwmhm\" (UniqueName: \"kubernetes.io/projected/895bdd55-2240-428e-9fad-4449bb7cbe36-kube-api-access-gwmhm\") pod \"octavia-3c95-account-create-update-wwn2s\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.403396 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/895bdd55-2240-428e-9fad-4449bb7cbe36-operator-scripts\") pod \"octavia-3c95-account-create-update-wwn2s\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.404151 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/895bdd55-2240-428e-9fad-4449bb7cbe36-operator-scripts\") pod \"octavia-3c95-account-create-update-wwn2s\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.416140 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-vch6z" 
event={"ID":"86350aa2-f96f-4ef9-9972-59ceda005637","Type":"ContainerStarted","Data":"c16c697ebc8d5206567d7a9bb8f62001d30dbd98bc405db0ff99d84f47ca0e7f"} Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.428124 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwmhm\" (UniqueName: \"kubernetes.io/projected/895bdd55-2240-428e-9fad-4449bb7cbe36-kube-api-access-gwmhm\") pod \"octavia-3c95-account-create-update-wwn2s\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.521109 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:43 crc kubenswrapper[4903]: I0128 17:26:43.972568 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-3c95-account-create-update-wwn2s"] Jan 28 17:26:44 crc kubenswrapper[4903]: I0128 17:26:44.426655 4903 generic.go:334] "Generic (PLEG): container finished" podID="895bdd55-2240-428e-9fad-4449bb7cbe36" containerID="071c625799897330b9ecdbf334d7c3678096acc450661f473d270bdb460d3b99" exitCode=0 Jan 28 17:26:44 crc kubenswrapper[4903]: I0128 17:26:44.426734 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-3c95-account-create-update-wwn2s" event={"ID":"895bdd55-2240-428e-9fad-4449bb7cbe36","Type":"ContainerDied","Data":"071c625799897330b9ecdbf334d7c3678096acc450661f473d270bdb460d3b99"} Jan 28 17:26:44 crc kubenswrapper[4903]: I0128 17:26:44.427314 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-3c95-account-create-update-wwn2s" event={"ID":"895bdd55-2240-428e-9fad-4449bb7cbe36","Type":"ContainerStarted","Data":"7348ec4bead4a51226ebc8c3aa0071a1f7202a6906e863f52c00932bac07189f"} Jan 28 17:26:44 crc kubenswrapper[4903]: I0128 17:26:44.428856 4903 generic.go:334] "Generic (PLEG): container finished" podID="86350aa2-f96f-4ef9-9972-59ceda005637" containerID="902f80c47ecd4b91ae9f13cc504de58fcd5b0b801cf945e4057e244115f17105" exitCode=0 Jan 28 17:26:44 crc kubenswrapper[4903]: I0128 17:26:44.428905 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-vch6z" event={"ID":"86350aa2-f96f-4ef9-9972-59ceda005637","Type":"ContainerDied","Data":"902f80c47ecd4b91ae9f13cc504de58fcd5b0b801cf945e4057e244115f17105"} Jan 28 17:26:45 crc kubenswrapper[4903]: I0128 17:26:45.887883 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:45 crc kubenswrapper[4903]: I0128 17:26:45.895572 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.058171 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86350aa2-f96f-4ef9-9972-59ceda005637-operator-scripts\") pod \"86350aa2-f96f-4ef9-9972-59ceda005637\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.058253 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrt4r\" (UniqueName: \"kubernetes.io/projected/86350aa2-f96f-4ef9-9972-59ceda005637-kube-api-access-wrt4r\") pod \"86350aa2-f96f-4ef9-9972-59ceda005637\" (UID: \"86350aa2-f96f-4ef9-9972-59ceda005637\") " Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.058349 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwmhm\" (UniqueName: \"kubernetes.io/projected/895bdd55-2240-428e-9fad-4449bb7cbe36-kube-api-access-gwmhm\") pod \"895bdd55-2240-428e-9fad-4449bb7cbe36\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.058403 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/895bdd55-2240-428e-9fad-4449bb7cbe36-operator-scripts\") pod \"895bdd55-2240-428e-9fad-4449bb7cbe36\" (UID: \"895bdd55-2240-428e-9fad-4449bb7cbe36\") " Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.058827 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86350aa2-f96f-4ef9-9972-59ceda005637-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86350aa2-f96f-4ef9-9972-59ceda005637" (UID: "86350aa2-f96f-4ef9-9972-59ceda005637"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.059390 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86350aa2-f96f-4ef9-9972-59ceda005637-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.060221 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/895bdd55-2240-428e-9fad-4449bb7cbe36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "895bdd55-2240-428e-9fad-4449bb7cbe36" (UID: "895bdd55-2240-428e-9fad-4449bb7cbe36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.066235 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86350aa2-f96f-4ef9-9972-59ceda005637-kube-api-access-wrt4r" (OuterVolumeSpecName: "kube-api-access-wrt4r") pod "86350aa2-f96f-4ef9-9972-59ceda005637" (UID: "86350aa2-f96f-4ef9-9972-59ceda005637"). InnerVolumeSpecName "kube-api-access-wrt4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.066322 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895bdd55-2240-428e-9fad-4449bb7cbe36-kube-api-access-gwmhm" (OuterVolumeSpecName: "kube-api-access-gwmhm") pod "895bdd55-2240-428e-9fad-4449bb7cbe36" (UID: "895bdd55-2240-428e-9fad-4449bb7cbe36"). InnerVolumeSpecName "kube-api-access-gwmhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.161249 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwmhm\" (UniqueName: \"kubernetes.io/projected/895bdd55-2240-428e-9fad-4449bb7cbe36-kube-api-access-gwmhm\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.161607 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/895bdd55-2240-428e-9fad-4449bb7cbe36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.161620 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrt4r\" (UniqueName: \"kubernetes.io/projected/86350aa2-f96f-4ef9-9972-59ceda005637-kube-api-access-wrt4r\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.456444 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-3c95-account-create-update-wwn2s" event={"ID":"895bdd55-2240-428e-9fad-4449bb7cbe36","Type":"ContainerDied","Data":"7348ec4bead4a51226ebc8c3aa0071a1f7202a6906e863f52c00932bac07189f"} Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.456492 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7348ec4bead4a51226ebc8c3aa0071a1f7202a6906e863f52c00932bac07189f" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.456500 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-3c95-account-create-update-wwn2s" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.458212 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-vch6z" event={"ID":"86350aa2-f96f-4ef9-9972-59ceda005637","Type":"ContainerDied","Data":"c16c697ebc8d5206567d7a9bb8f62001d30dbd98bc405db0ff99d84f47ca0e7f"} Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.458242 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16c697ebc8d5206567d7a9bb8f62001d30dbd98bc405db0ff99d84f47ca0e7f" Jan 28 17:26:46 crc kubenswrapper[4903]: I0128 17:26:46.458294 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-vch6z" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.439514 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-wp9mh"] Jan 28 17:26:48 crc kubenswrapper[4903]: E0128 17:26:48.440305 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86350aa2-f96f-4ef9-9972-59ceda005637" containerName="mariadb-database-create" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.440322 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="86350aa2-f96f-4ef9-9972-59ceda005637" containerName="mariadb-database-create" Jan 28 17:26:48 crc kubenswrapper[4903]: E0128 17:26:48.440336 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895bdd55-2240-428e-9fad-4449bb7cbe36" containerName="mariadb-account-create-update" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.440343 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="895bdd55-2240-428e-9fad-4449bb7cbe36" containerName="mariadb-account-create-update" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.440573 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="86350aa2-f96f-4ef9-9972-59ceda005637" containerName="mariadb-database-create" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.440613 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="895bdd55-2240-428e-9fad-4449bb7cbe36" containerName="mariadb-account-create-update" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.441309 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.482422 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-wp9mh"] Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.606103 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/141f08f5-50f7-429e-bc31-888f86f1a477-operator-scripts\") pod \"octavia-persistence-db-create-wp9mh\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.606386 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmtq7\" (UniqueName: \"kubernetes.io/projected/141f08f5-50f7-429e-bc31-888f86f1a477-kube-api-access-wmtq7\") pod \"octavia-persistence-db-create-wp9mh\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.708473 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/141f08f5-50f7-429e-bc31-888f86f1a477-operator-scripts\") pod \"octavia-persistence-db-create-wp9mh\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.708869 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmtq7\" (UniqueName: \"kubernetes.io/projected/141f08f5-50f7-429e-bc31-888f86f1a477-kube-api-access-wmtq7\") pod \"octavia-persistence-db-create-wp9mh\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 
17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.709165 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/141f08f5-50f7-429e-bc31-888f86f1a477-operator-scripts\") pod \"octavia-persistence-db-create-wp9mh\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.724598 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmtq7\" (UniqueName: \"kubernetes.io/projected/141f08f5-50f7-429e-bc31-888f86f1a477-kube-api-access-wmtq7\") pod \"octavia-persistence-db-create-wp9mh\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:48 crc kubenswrapper[4903]: I0128 17:26:48.801007 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:49 crc kubenswrapper[4903]: W0128 17:26:49.258183 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod141f08f5_50f7_429e_bc31_888f86f1a477.slice/crio-3f9af11a917610470ddaff10a2946a0fb6a7dc5ca504f01845c08182dbbbdf24 WatchSource:0}: Error finding container 3f9af11a917610470ddaff10a2946a0fb6a7dc5ca504f01845c08182dbbbdf24: Status 404 returned error can't find the container with id 3f9af11a917610470ddaff10a2946a0fb6a7dc5ca504f01845c08182dbbbdf24 Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.259392 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-wp9mh"] Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.492787 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-wp9mh" event={"ID":"141f08f5-50f7-429e-bc31-888f86f1a477","Type":"ContainerStarted","Data":"553255b762b13ecc48a3dcd82a8c6eed3a34ca56ff26142e2c5d40f87d8baea5"} Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.493098 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-wp9mh" event={"ID":"141f08f5-50f7-429e-bc31-888f86f1a477","Type":"ContainerStarted","Data":"3f9af11a917610470ddaff10a2946a0fb6a7dc5ca504f01845c08182dbbbdf24"} Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.494112 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-2943-account-create-update-8fdqj"] Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.495157 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.500774 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.507686 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-2943-account-create-update-8fdqj"] Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.521946 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-persistence-db-create-wp9mh" podStartSLOduration=1.521919821 podStartE2EDuration="1.521919821s" podCreationTimestamp="2026-01-28 17:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:26:49.513970886 +0000 UTC m=+6081.789942417" watchObservedRunningTime="2026-01-28 17:26:49.521919821 +0000 UTC m=+6081.797891342" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.625730 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqpl5\" (UniqueName: \"kubernetes.io/projected/446f97b9-ee08-4b89-8fe6-e17021aaa142-kube-api-access-zqpl5\") pod \"octavia-2943-account-create-update-8fdqj\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.625821 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/446f97b9-ee08-4b89-8fe6-e17021aaa142-operator-scripts\") pod \"octavia-2943-account-create-update-8fdqj\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.728200 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/446f97b9-ee08-4b89-8fe6-e17021aaa142-operator-scripts\") pod \"octavia-2943-account-create-update-8fdqj\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.729005 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqpl5\" (UniqueName: \"kubernetes.io/projected/446f97b9-ee08-4b89-8fe6-e17021aaa142-kube-api-access-zqpl5\") pod \"octavia-2943-account-create-update-8fdqj\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.729219 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/446f97b9-ee08-4b89-8fe6-e17021aaa142-operator-scripts\") pod \"octavia-2943-account-create-update-8fdqj\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.751981 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqpl5\" (UniqueName: \"kubernetes.io/projected/446f97b9-ee08-4b89-8fe6-e17021aaa142-kube-api-access-zqpl5\") pod \"octavia-2943-account-create-update-8fdqj\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " 
pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:49 crc kubenswrapper[4903]: I0128 17:26:49.816502 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:50 crc kubenswrapper[4903]: W0128 17:26:50.258835 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod446f97b9_ee08_4b89_8fe6_e17021aaa142.slice/crio-7439a38a4253fab13ed0bbc88d5fd3eb0e91b6af04eb15f0b17f2aee3968b1c2 WatchSource:0}: Error finding container 7439a38a4253fab13ed0bbc88d5fd3eb0e91b6af04eb15f0b17f2aee3968b1c2: Status 404 returned error can't find the container with id 7439a38a4253fab13ed0bbc88d5fd3eb0e91b6af04eb15f0b17f2aee3968b1c2 Jan 28 17:26:50 crc kubenswrapper[4903]: I0128 17:26:50.264301 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-2943-account-create-update-8fdqj"] Jan 28 17:26:50 crc kubenswrapper[4903]: I0128 17:26:50.503081 4903 generic.go:334] "Generic (PLEG): container finished" podID="141f08f5-50f7-429e-bc31-888f86f1a477" containerID="553255b762b13ecc48a3dcd82a8c6eed3a34ca56ff26142e2c5d40f87d8baea5" exitCode=0 Jan 28 17:26:50 crc kubenswrapper[4903]: I0128 17:26:50.503180 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-wp9mh" event={"ID":"141f08f5-50f7-429e-bc31-888f86f1a477","Type":"ContainerDied","Data":"553255b762b13ecc48a3dcd82a8c6eed3a34ca56ff26142e2c5d40f87d8baea5"} Jan 28 17:26:50 crc kubenswrapper[4903]: I0128 17:26:50.505582 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-2943-account-create-update-8fdqj" event={"ID":"446f97b9-ee08-4b89-8fe6-e17021aaa142","Type":"ContainerStarted","Data":"bc604d076fa8fd376e7afdd81c370e9d350ea00e554e1ea7bcebf24eaba28cb8"} Jan 28 17:26:50 crc kubenswrapper[4903]: I0128 17:26:50.505629 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-2943-account-create-update-8fdqj" event={"ID":"446f97b9-ee08-4b89-8fe6-e17021aaa142","Type":"ContainerStarted","Data":"7439a38a4253fab13ed0bbc88d5fd3eb0e91b6af04eb15f0b17f2aee3968b1c2"} Jan 28 17:26:50 crc kubenswrapper[4903]: I0128 17:26:50.537469 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-2943-account-create-update-8fdqj" podStartSLOduration=1.537363015 podStartE2EDuration="1.537363015s" podCreationTimestamp="2026-01-28 17:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:26:50.535797583 +0000 UTC m=+6082.811769094" watchObservedRunningTime="2026-01-28 17:26:50.537363015 +0000 UTC m=+6082.813334526" Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.515441 4903 generic.go:334] "Generic (PLEG): container finished" podID="446f97b9-ee08-4b89-8fe6-e17021aaa142" containerID="bc604d076fa8fd376e7afdd81c370e9d350ea00e554e1ea7bcebf24eaba28cb8" exitCode=0 Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.515739 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-2943-account-create-update-8fdqj" event={"ID":"446f97b9-ee08-4b89-8fe6-e17021aaa142","Type":"ContainerDied","Data":"bc604d076fa8fd376e7afdd81c370e9d350ea00e554e1ea7bcebf24eaba28cb8"} Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.827473 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.885401 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/141f08f5-50f7-429e-bc31-888f86f1a477-operator-scripts\") pod \"141f08f5-50f7-429e-bc31-888f86f1a477\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.885759 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmtq7\" (UniqueName: \"kubernetes.io/projected/141f08f5-50f7-429e-bc31-888f86f1a477-kube-api-access-wmtq7\") pod \"141f08f5-50f7-429e-bc31-888f86f1a477\" (UID: \"141f08f5-50f7-429e-bc31-888f86f1a477\") " Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.886058 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/141f08f5-50f7-429e-bc31-888f86f1a477-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "141f08f5-50f7-429e-bc31-888f86f1a477" (UID: "141f08f5-50f7-429e-bc31-888f86f1a477"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.886381 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/141f08f5-50f7-429e-bc31-888f86f1a477-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.896891 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141f08f5-50f7-429e-bc31-888f86f1a477-kube-api-access-wmtq7" (OuterVolumeSpecName: "kube-api-access-wmtq7") pod "141f08f5-50f7-429e-bc31-888f86f1a477" (UID: "141f08f5-50f7-429e-bc31-888f86f1a477"). InnerVolumeSpecName "kube-api-access-wmtq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:26:51 crc kubenswrapper[4903]: I0128 17:26:51.988082 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmtq7\" (UniqueName: \"kubernetes.io/projected/141f08f5-50f7-429e-bc31-888f86f1a477-kube-api-access-wmtq7\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.546692 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-wp9mh" event={"ID":"141f08f5-50f7-429e-bc31-888f86f1a477","Type":"ContainerDied","Data":"3f9af11a917610470ddaff10a2946a0fb6a7dc5ca504f01845c08182dbbbdf24"} Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.547000 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f9af11a917610470ddaff10a2946a0fb6a7dc5ca504f01845c08182dbbbdf24" Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.550057 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-wp9mh" Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.846753 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.902911 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/446f97b9-ee08-4b89-8fe6-e17021aaa142-operator-scripts\") pod \"446f97b9-ee08-4b89-8fe6-e17021aaa142\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.903043 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqpl5\" (UniqueName: \"kubernetes.io/projected/446f97b9-ee08-4b89-8fe6-e17021aaa142-kube-api-access-zqpl5\") pod \"446f97b9-ee08-4b89-8fe6-e17021aaa142\" (UID: \"446f97b9-ee08-4b89-8fe6-e17021aaa142\") " Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.903675 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/446f97b9-ee08-4b89-8fe6-e17021aaa142-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "446f97b9-ee08-4b89-8fe6-e17021aaa142" (UID: "446f97b9-ee08-4b89-8fe6-e17021aaa142"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:26:52 crc kubenswrapper[4903]: I0128 17:26:52.913697 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446f97b9-ee08-4b89-8fe6-e17021aaa142-kube-api-access-zqpl5" (OuterVolumeSpecName: "kube-api-access-zqpl5") pod "446f97b9-ee08-4b89-8fe6-e17021aaa142" (UID: "446f97b9-ee08-4b89-8fe6-e17021aaa142"). InnerVolumeSpecName "kube-api-access-zqpl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:26:53 crc kubenswrapper[4903]: I0128 17:26:53.005409 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/446f97b9-ee08-4b89-8fe6-e17021aaa142-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:53 crc kubenswrapper[4903]: I0128 17:26:53.005456 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqpl5\" (UniqueName: \"kubernetes.io/projected/446f97b9-ee08-4b89-8fe6-e17021aaa142-kube-api-access-zqpl5\") on node \"crc\" DevicePath \"\"" Jan 28 17:26:53 crc kubenswrapper[4903]: I0128 17:26:53.555439 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-2943-account-create-update-8fdqj" event={"ID":"446f97b9-ee08-4b89-8fe6-e17021aaa142","Type":"ContainerDied","Data":"7439a38a4253fab13ed0bbc88d5fd3eb0e91b6af04eb15f0b17f2aee3968b1c2"} Jan 28 17:26:53 crc kubenswrapper[4903]: I0128 17:26:53.555841 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7439a38a4253fab13ed0bbc88d5fd3eb0e91b6af04eb15f0b17f2aee3968b1c2" Jan 28 17:26:53 crc kubenswrapper[4903]: I0128 17:26:53.555506 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-2943-account-create-update-8fdqj" Jan 28 17:26:54 crc kubenswrapper[4903]: I0128 17:26:54.827640 4903 scope.go:117] "RemoveContainer" containerID="8c51c6d0adc0b7bd2e1b3a4932a79291c6b97b683805065523aedfe04c911b7b" Jan 28 17:26:54 crc kubenswrapper[4903]: I0128 17:26:54.865704 4903 scope.go:117] "RemoveContainer" containerID="b455b6c2c1a175d84654681a61f0a9ee65cdcb3d108ca5b17fd86f9cac54bfde" Jan 28 17:26:54 crc kubenswrapper[4903]: I0128 17:26:54.923664 4903 scope.go:117] "RemoveContainer" containerID="82d81b0900522e9e47b68b6f811d992938d53ae7412c24caa20efc99a6da1cfe" Jan 28 17:26:54 crc kubenswrapper[4903]: I0128 17:26:54.957053 4903 scope.go:117] "RemoveContainer" containerID="3eeacf0beeb740d67e93d1be76a9e4be7ecb0a22652d5dbbdd2ae458273d7c69" Jan 28 17:26:54 crc kubenswrapper[4903]: I0128 17:26:54.999415 4903 scope.go:117] "RemoveContainer" containerID="b8ba782e487a3828fcd16534bfa296cd5b1788b29e6d891cc1269a22c10222d5" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.052255 4903 scope.go:117] "RemoveContainer" containerID="8bf41b837c65786518621ee351b531ac5c00c4e5c10b0963b2b2adb613c98db0" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.613163 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-9f94c9bc9-9mnhj"] Jan 28 17:26:55 crc kubenswrapper[4903]: E0128 17:26:55.613570 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141f08f5-50f7-429e-bc31-888f86f1a477" containerName="mariadb-database-create" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.613587 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="141f08f5-50f7-429e-bc31-888f86f1a477" containerName="mariadb-database-create" Jan 28 17:26:55 crc kubenswrapper[4903]: E0128 17:26:55.613613 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446f97b9-ee08-4b89-8fe6-e17021aaa142" containerName="mariadb-account-create-update" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.613620 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="446f97b9-ee08-4b89-8fe6-e17021aaa142" containerName="mariadb-account-create-update" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.613790 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="141f08f5-50f7-429e-bc31-888f86f1a477" containerName="mariadb-database-create" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.613809 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="446f97b9-ee08-4b89-8fe6-e17021aaa142" containerName="mariadb-account-create-update" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.617278 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.622038 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-4n72d" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.622283 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.622430 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.622651 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.632186 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-9f94c9bc9-9mnhj"] Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.672767 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-combined-ca-bundle\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.672957 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.673026 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-scripts\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.673079 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-ovndb-tls-certs\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.673105 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-octavia-run\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.673192 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data-merged\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.774971 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-combined-ca-bundle\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.775444 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.775630 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-scripts\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.775727 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-ovndb-tls-certs\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.775816 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-octavia-run\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.775920 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data-merged\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.776249 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-octavia-run\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.776424 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data-merged\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.781809 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-ovndb-tls-certs\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.781869 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-scripts\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: 
\"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.782698 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-combined-ca-bundle\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.784638 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data\") pod \"octavia-api-9f94c9bc9-9mnhj\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:55 crc kubenswrapper[4903]: I0128 17:26:55.939931 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:26:56 crc kubenswrapper[4903]: I0128 17:26:56.408794 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-9f94c9bc9-9mnhj"] Jan 28 17:26:56 crc kubenswrapper[4903]: I0128 17:26:56.416145 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:26:56 crc kubenswrapper[4903]: I0128 17:26:56.591072 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerStarted","Data":"2f6370a33e42966ed9ff78e447bcd07f7ea06c1b4d5f5b5156d35524ef44b63c"} Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.554270 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fw4sp"] Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.569588 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.589001 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fw4sp"] Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.711658 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-utilities\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.711759 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-catalog-content\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.711821 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8756h\" (UniqueName: \"kubernetes.io/projected/164b273e-8669-462b-a1b5-0091ff01e399-kube-api-access-8756h\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.814143 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-catalog-content\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.814233 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8756h\" (UniqueName: \"kubernetes.io/projected/164b273e-8669-462b-a1b5-0091ff01e399-kube-api-access-8756h\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.814406 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-utilities\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.814760 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-catalog-content\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.814871 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-utilities\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.834439 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8756h\" (UniqueName: \"kubernetes.io/projected/164b273e-8669-462b-a1b5-0091ff01e399-kube-api-access-8756h\") pod \"certified-operators-fw4sp\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:57 crc kubenswrapper[4903]: I0128 17:26:57.909999 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:26:58 crc kubenswrapper[4903]: I0128 17:26:58.622277 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fw4sp"] Jan 28 17:26:58 crc kubenswrapper[4903]: W0128 17:26:58.648719 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164b273e_8669_462b_a1b5_0091ff01e399.slice/crio-3fd4ffad2ad1914dfbe578e177cef61ca9a3979b3bb42cbf76de899cff07ccf8 WatchSource:0}: Error finding container 3fd4ffad2ad1914dfbe578e177cef61ca9a3979b3bb42cbf76de899cff07ccf8: Status 404 returned error can't find the container with id 3fd4ffad2ad1914dfbe578e177cef61ca9a3979b3bb42cbf76de899cff07ccf8 Jan 28 17:26:59 crc kubenswrapper[4903]: I0128 17:26:59.640209 4903 generic.go:334] "Generic (PLEG): container finished" podID="164b273e-8669-462b-a1b5-0091ff01e399" containerID="6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed" exitCode=0 Jan 28 17:26:59 crc kubenswrapper[4903]: I0128 17:26:59.640772 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fw4sp" event={"ID":"164b273e-8669-462b-a1b5-0091ff01e399","Type":"ContainerDied","Data":"6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed"} Jan 28 17:26:59 crc kubenswrapper[4903]: I0128 17:26:59.642321 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fw4sp" event={"ID":"164b273e-8669-462b-a1b5-0091ff01e399","Type":"ContainerStarted","Data":"3fd4ffad2ad1914dfbe578e177cef61ca9a3979b3bb42cbf76de899cff07ccf8"} Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.516561 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-krf4w" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.574292 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.598835 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6w9c9" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.856570 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-krf4w-config-wj9zr"] Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.864686 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.871355 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.880236 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-krf4w-config-wj9zr"] Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.977809 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-scripts\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.977948 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run-ovn\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.977979 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-additional-scripts\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.978012 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-log-ovn\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.978038 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:00 crc kubenswrapper[4903]: I0128 17:27:00.978061 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45vzw\" (UniqueName: \"kubernetes.io/projected/f1bd941a-fd91-44a8-8c70-f98f0746f194-kube-api-access-45vzw\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.079666 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run-ovn\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.079705 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-additional-scripts\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.079752 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-log-ovn\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.079784 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.079813 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45vzw\" (UniqueName: \"kubernetes.io/projected/f1bd941a-fd91-44a8-8c70-f98f0746f194-kube-api-access-45vzw\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.079877 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-scripts\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.080219 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run-ovn\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.080403 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-log-ovn\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.080579 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.081344 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-additional-scripts\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.109573 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-scripts\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.129637 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45vzw\" (UniqueName: \"kubernetes.io/projected/f1bd941a-fd91-44a8-8c70-f98f0746f194-kube-api-access-45vzw\") pod \"ovn-controller-krf4w-config-wj9zr\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.184210 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.672447 4903 generic.go:334] "Generic (PLEG): container finished" podID="164b273e-8669-462b-a1b5-0091ff01e399" containerID="376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238" exitCode=0 Jan 28 17:27:01 crc kubenswrapper[4903]: I0128 17:27:01.672507 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fw4sp" event={"ID":"164b273e-8669-462b-a1b5-0091ff01e399","Type":"ContainerDied","Data":"376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238"} Jan 28 17:27:08 crc kubenswrapper[4903]: I0128 17:27:08.746397 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fw4sp" event={"ID":"164b273e-8669-462b-a1b5-0091ff01e399","Type":"ContainerStarted","Data":"d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82"} Jan 28 17:27:08 crc kubenswrapper[4903]: I0128 17:27:08.748212 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerStarted","Data":"4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267"} Jan 28 17:27:08 crc kubenswrapper[4903]: I0128 17:27:08.766360 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-krf4w-config-wj9zr"] Jan 28 17:27:08 crc kubenswrapper[4903]: W0128 17:27:08.766484 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1bd941a_fd91_44a8_8c70_f98f0746f194.slice/crio-bbd565e133b92b2d24aa80a77dd99b22b96fa8b3de462cbf6e8dfaec63f5af36 WatchSource:0}: Error finding container bbd565e133b92b2d24aa80a77dd99b22b96fa8b3de462cbf6e8dfaec63f5af36: Status 404 returned error can't find the container with id bbd565e133b92b2d24aa80a77dd99b22b96fa8b3de462cbf6e8dfaec63f5af36 Jan 28 17:27:08 crc kubenswrapper[4903]: I0128 17:27:08.780088 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fw4sp" podStartSLOduration=3.07686125 podStartE2EDuration="11.780071978s" podCreationTimestamp="2026-01-28 17:26:57 +0000 UTC" firstStartedPulling="2026-01-28 17:26:59.643042472 +0000 UTC m=+6091.919013983" lastFinishedPulling="2026-01-28 17:27:08.34625319 +0000 UTC m=+6100.622224711" observedRunningTime="2026-01-28 17:27:08.767929918 +0000 UTC m=+6101.043901429" watchObservedRunningTime="2026-01-28 17:27:08.780071978 +0000 UTC m=+6101.056043489" Jan 28 17:27:09 crc kubenswrapper[4903]: I0128 17:27:09.758963 4903 generic.go:334] "Generic (PLEG): container finished" podID="920732d8-23d2-40c2-80d3-3b74e9843c96" 
containerID="4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267" exitCode=0 Jan 28 17:27:09 crc kubenswrapper[4903]: I0128 17:27:09.759064 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerDied","Data":"4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267"} Jan 28 17:27:09 crc kubenswrapper[4903]: I0128 17:27:09.762325 4903 generic.go:334] "Generic (PLEG): container finished" podID="f1bd941a-fd91-44a8-8c70-f98f0746f194" containerID="784dbbe4c3d5afcc264b5c7a83e2b6567de35d190193e3e8b68f1cb22b81d1b4" exitCode=0 Jan 28 17:27:09 crc kubenswrapper[4903]: I0128 17:27:09.763423 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w-config-wj9zr" event={"ID":"f1bd941a-fd91-44a8-8c70-f98f0746f194","Type":"ContainerDied","Data":"784dbbe4c3d5afcc264b5c7a83e2b6567de35d190193e3e8b68f1cb22b81d1b4"} Jan 28 17:27:09 crc kubenswrapper[4903]: I0128 17:27:09.763459 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w-config-wj9zr" event={"ID":"f1bd941a-fd91-44a8-8c70-f98f0746f194","Type":"ContainerStarted","Data":"bbd565e133b92b2d24aa80a77dd99b22b96fa8b3de462cbf6e8dfaec63f5af36"} Jan 28 17:27:10 crc kubenswrapper[4903]: I0128 17:27:10.773163 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerStarted","Data":"118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0"} Jan 28 17:27:10 crc kubenswrapper[4903]: I0128 17:27:10.773555 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerStarted","Data":"899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f"} Jan 28 17:27:10 crc kubenswrapper[4903]: I0128 17:27:10.797679 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-9f94c9bc9-9mnhj" podStartSLOduration=3.867799244 podStartE2EDuration="15.797659706s" podCreationTimestamp="2026-01-28 17:26:55 +0000 UTC" firstStartedPulling="2026-01-28 17:26:56.415937435 +0000 UTC m=+6088.691908946" lastFinishedPulling="2026-01-28 17:27:08.345797897 +0000 UTC m=+6100.621769408" observedRunningTime="2026-01-28 17:27:10.791148479 +0000 UTC m=+6103.067119990" watchObservedRunningTime="2026-01-28 17:27:10.797659706 +0000 UTC m=+6103.073631217" Jan 28 17:27:10 crc kubenswrapper[4903]: I0128 17:27:10.941050 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:27:10 crc kubenswrapper[4903]: I0128 17:27:10.941153 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.118468 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.203277 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run\") pod \"f1bd941a-fd91-44a8-8c70-f98f0746f194\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.203369 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-scripts\") pod \"f1bd941a-fd91-44a8-8c70-f98f0746f194\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.203405 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-additional-scripts\") pod \"f1bd941a-fd91-44a8-8c70-f98f0746f194\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.203425 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-log-ovn\") pod \"f1bd941a-fd91-44a8-8c70-f98f0746f194\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.203564 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run" (OuterVolumeSpecName: "var-run") pod "f1bd941a-fd91-44a8-8c70-f98f0746f194" (UID: "f1bd941a-fd91-44a8-8c70-f98f0746f194"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.203811 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f1bd941a-fd91-44a8-8c70-f98f0746f194" (UID: "f1bd941a-fd91-44a8-8c70-f98f0746f194"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.204272 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45vzw\" (UniqueName: \"kubernetes.io/projected/f1bd941a-fd91-44a8-8c70-f98f0746f194-kube-api-access-45vzw\") pod \"f1bd941a-fd91-44a8-8c70-f98f0746f194\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.204314 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f1bd941a-fd91-44a8-8c70-f98f0746f194" (UID: "f1bd941a-fd91-44a8-8c70-f98f0746f194"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.204343 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run-ovn\") pod \"f1bd941a-fd91-44a8-8c70-f98f0746f194\" (UID: \"f1bd941a-fd91-44a8-8c70-f98f0746f194\") " Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.204427 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f1bd941a-fd91-44a8-8c70-f98f0746f194" (UID: "f1bd941a-fd91-44a8-8c70-f98f0746f194"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.204585 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-scripts" (OuterVolumeSpecName: "scripts") pod "f1bd941a-fd91-44a8-8c70-f98f0746f194" (UID: "f1bd941a-fd91-44a8-8c70-f98f0746f194"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.204979 4903 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.205002 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.205014 4903 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f1bd941a-fd91-44a8-8c70-f98f0746f194-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.205025 4903 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.205037 4903 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f1bd941a-fd91-44a8-8c70-f98f0746f194-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.214583 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1bd941a-fd91-44a8-8c70-f98f0746f194-kube-api-access-45vzw" (OuterVolumeSpecName: "kube-api-access-45vzw") pod "f1bd941a-fd91-44a8-8c70-f98f0746f194" (UID: "f1bd941a-fd91-44a8-8c70-f98f0746f194"). InnerVolumeSpecName "kube-api-access-45vzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.306518 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45vzw\" (UniqueName: \"kubernetes.io/projected/f1bd941a-fd91-44a8-8c70-f98f0746f194-kube-api-access-45vzw\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.782052 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w-config-wj9zr" event={"ID":"f1bd941a-fd91-44a8-8c70-f98f0746f194","Type":"ContainerDied","Data":"bbd565e133b92b2d24aa80a77dd99b22b96fa8b3de462cbf6e8dfaec63f5af36"} Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.782351 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbd565e133b92b2d24aa80a77dd99b22b96fa8b3de462cbf6e8dfaec63f5af36" Jan 28 17:27:11 crc kubenswrapper[4903]: I0128 17:27:11.782138 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-krf4w-config-wj9zr" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.207717 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-krf4w-config-wj9zr"] Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.218786 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-krf4w-config-wj9zr"] Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.337688 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-krf4w-config-76lq9"] Jan 28 17:27:12 crc kubenswrapper[4903]: E0128 17:27:12.338197 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bd941a-fd91-44a8-8c70-f98f0746f194" containerName="ovn-config" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.338220 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bd941a-fd91-44a8-8c70-f98f0746f194" containerName="ovn-config" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.338442 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1bd941a-fd91-44a8-8c70-f98f0746f194" containerName="ovn-config" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.339246 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.345991 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.411242 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-krf4w-config-76lq9"] Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.433143 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-additional-scripts\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.433218 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphfh\" (UniqueName: \"kubernetes.io/projected/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-kube-api-access-dphfh\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.433422 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1bd941a-fd91-44a8-8c70-f98f0746f194" path="/var/lib/kubelet/pods/f1bd941a-fd91-44a8-8c70-f98f0746f194/volumes" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.433855 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run-ovn\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.433998 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.434059 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-log-ovn\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.434086 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-scripts\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.535829 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run-ovn\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" 
Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.535959 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.535995 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-scripts\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.536012 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-log-ovn\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.536054 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-additional-scripts\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.536086 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dphfh\" (UniqueName: \"kubernetes.io/projected/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-kube-api-access-dphfh\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.536291 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.536315 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run-ovn\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.536348 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-log-ovn\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.537096 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-additional-scripts\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc 
kubenswrapper[4903]: I0128 17:27:12.538432 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-scripts\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.579267 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dphfh\" (UniqueName: \"kubernetes.io/projected/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-kube-api-access-dphfh\") pod \"ovn-controller-krf4w-config-76lq9\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:12 crc kubenswrapper[4903]: I0128 17:27:12.660413 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:13 crc kubenswrapper[4903]: I0128 17:27:13.154309 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-krf4w-config-76lq9"] Jan 28 17:27:13 crc kubenswrapper[4903]: I0128 17:27:13.806249 4903 generic.go:334] "Generic (PLEG): container finished" podID="7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" containerID="25c04baf658d0c8c3a7ce474272c54534c9cea231fe9b6dac28b47bb213d7f18" exitCode=0 Jan 28 17:27:13 crc kubenswrapper[4903]: I0128 17:27:13.806473 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w-config-76lq9" event={"ID":"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba","Type":"ContainerDied","Data":"25c04baf658d0c8c3a7ce474272c54534c9cea231fe9b6dac28b47bb213d7f18"} Jan 28 17:27:13 crc kubenswrapper[4903]: I0128 17:27:13.806558 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w-config-76lq9" event={"ID":"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba","Type":"ContainerStarted","Data":"eef2f3ba024f33c373b6c81ff95daae2addccd672d98faeed5d8aa952ecf8eb6"} Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.183763 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.305010 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run\") pod \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.305306 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-log-ovn\") pod \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.305409 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-additional-scripts\") pod \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.305468 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run-ovn\") pod \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.305517 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dphfh\" (UniqueName: \"kubernetes.io/projected/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-kube-api-access-dphfh\") pod \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.305684 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-scripts\") pod \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\" (UID: \"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba\") " Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.306991 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" (UID: "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.307009 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run" (OuterVolumeSpecName: "var-run") pod "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" (UID: "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.307052 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" (UID: "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.308097 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" (UID: "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.309282 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-scripts" (OuterVolumeSpecName: "scripts") pod "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" (UID: "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.312313 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-kube-api-access-dphfh" (OuterVolumeSpecName: "kube-api-access-dphfh") pod "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" (UID: "7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba"). InnerVolumeSpecName "kube-api-access-dphfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.407513 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.407560 4903 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.407569 4903 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.407580 4903 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.407588 4903 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.407598 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dphfh\" (UniqueName: \"kubernetes.io/projected/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba-kube-api-access-dphfh\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.826586 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-krf4w-config-76lq9" event={"ID":"7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba","Type":"ContainerDied","Data":"eef2f3ba024f33c373b6c81ff95daae2addccd672d98faeed5d8aa952ecf8eb6"} Jan 28 17:27:15 crc kubenswrapper[4903]: I0128 17:27:15.826636 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef2f3ba024f33c373b6c81ff95daae2addccd672d98faeed5d8aa952ecf8eb6" Jan 28 17:27:15 crc 
kubenswrapper[4903]: I0128 17:27:15.826657 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-krf4w-config-76lq9" Jan 28 17:27:16 crc kubenswrapper[4903]: I0128 17:27:16.280306 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-krf4w-config-76lq9"] Jan 28 17:27:16 crc kubenswrapper[4903]: I0128 17:27:16.314910 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-krf4w-config-76lq9"] Jan 28 17:27:16 crc kubenswrapper[4903]: I0128 17:27:16.426338 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" path="/var/lib/kubelet/pods/7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba/volumes" Jan 28 17:27:17 crc kubenswrapper[4903]: I0128 17:27:17.910558 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:27:17 crc kubenswrapper[4903]: I0128 17:27:17.910910 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:27:17 crc kubenswrapper[4903]: I0128 17:27:17.961438 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:27:18 crc kubenswrapper[4903]: I0128 17:27:18.898254 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:27:18 crc kubenswrapper[4903]: I0128 17:27:18.955949 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fw4sp"] Jan 28 17:27:20 crc kubenswrapper[4903]: I0128 17:27:20.870396 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fw4sp" podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="registry-server" containerID="cri-o://d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82" gracePeriod=2 Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.291653 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.424452 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-catalog-content\") pod \"164b273e-8669-462b-a1b5-0091ff01e399\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.424509 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-utilities\") pod \"164b273e-8669-462b-a1b5-0091ff01e399\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.424662 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8756h\" (UniqueName: \"kubernetes.io/projected/164b273e-8669-462b-a1b5-0091ff01e399-kube-api-access-8756h\") pod \"164b273e-8669-462b-a1b5-0091ff01e399\" (UID: \"164b273e-8669-462b-a1b5-0091ff01e399\") " Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.428505 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-utilities" (OuterVolumeSpecName: "utilities") pod "164b273e-8669-462b-a1b5-0091ff01e399" (UID: "164b273e-8669-462b-a1b5-0091ff01e399"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.430441 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164b273e-8669-462b-a1b5-0091ff01e399-kube-api-access-8756h" (OuterVolumeSpecName: "kube-api-access-8756h") pod "164b273e-8669-462b-a1b5-0091ff01e399" (UID: "164b273e-8669-462b-a1b5-0091ff01e399"). InnerVolumeSpecName "kube-api-access-8756h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.487584 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "164b273e-8669-462b-a1b5-0091ff01e399" (UID: "164b273e-8669-462b-a1b5-0091ff01e399"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.527192 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8756h\" (UniqueName: \"kubernetes.io/projected/164b273e-8669-462b-a1b5-0091ff01e399-kube-api-access-8756h\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.527221 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.527230 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164b273e-8669-462b-a1b5-0091ff01e399-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.879047 4903 generic.go:334] "Generic (PLEG): container finished" podID="164b273e-8669-462b-a1b5-0091ff01e399" containerID="d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82" exitCode=0 Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.879137 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fw4sp" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.879147 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fw4sp" event={"ID":"164b273e-8669-462b-a1b5-0091ff01e399","Type":"ContainerDied","Data":"d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82"} Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.879815 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fw4sp" event={"ID":"164b273e-8669-462b-a1b5-0091ff01e399","Type":"ContainerDied","Data":"3fd4ffad2ad1914dfbe578e177cef61ca9a3979b3bb42cbf76de899cff07ccf8"} Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.879851 4903 scope.go:117] "RemoveContainer" containerID="d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.914982 4903 scope.go:117] "RemoveContainer" containerID="376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.930647 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fw4sp"] Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.941427 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fw4sp"] Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.946369 4903 scope.go:117] "RemoveContainer" containerID="6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.987513 4903 scope.go:117] "RemoveContainer" containerID="d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82" Jan 28 17:27:21 crc kubenswrapper[4903]: E0128 17:27:21.988064 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82\": container with ID starting with d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82 not found: ID does not exist" containerID="d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.988113 
4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82"} err="failed to get container status \"d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82\": rpc error: code = NotFound desc = could not find container \"d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82\": container with ID starting with d2f575deffee78ec66e322cad28787f49a421142785b0dc34f5b23bccfec5a82 not found: ID does not exist" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.988139 4903 scope.go:117] "RemoveContainer" containerID="376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238" Jan 28 17:27:21 crc kubenswrapper[4903]: E0128 17:27:21.988443 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238\": container with ID starting with 376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238 not found: ID does not exist" containerID="376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.988485 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238"} err="failed to get container status \"376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238\": rpc error: code = NotFound desc = could not find container \"376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238\": container with ID starting with 376c160727281fa00323239fa249af9c92dd642d9a18b559b972691ed491f238 not found: ID does not exist" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.988510 4903 scope.go:117] "RemoveContainer" containerID="6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed" Jan 28 17:27:21 crc kubenswrapper[4903]: E0128 17:27:21.988775 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed\": container with ID starting with 6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed not found: ID does not exist" containerID="6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed" Jan 28 17:27:21 crc kubenswrapper[4903]: I0128 17:27:21.988800 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed"} err="failed to get container status \"6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed\": rpc error: code = NotFound desc = could not find container \"6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed\": container with ID starting with 6098efd2916cdd296f70f9f59a8dbad0397e61a884efd9dda29bcd7f446c98ed not found: ID does not exist" Jan 28 17:27:22 crc kubenswrapper[4903]: I0128 17:27:22.422489 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="164b273e-8669-462b-a1b5-0091ff01e399" path="/var/lib/kubelet/pods/164b273e-8669-462b-a1b5-0091ff01e399/volumes" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.316345 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-b74jf"] Jan 28 17:27:27 crc kubenswrapper[4903]: E0128 17:27:27.317443 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="extract-utilities" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.317461 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="extract-utilities" Jan 28 17:27:27 crc kubenswrapper[4903]: E0128 17:27:27.317478 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="registry-server" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.317485 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="registry-server" Jan 28 17:27:27 crc kubenswrapper[4903]: E0128 17:27:27.317496 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" containerName="ovn-config" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.317503 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" containerName="ovn-config" Jan 28 17:27:27 crc kubenswrapper[4903]: E0128 17:27:27.317587 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="extract-content" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.317596 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="extract-content" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.317813 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff77a32-e3c8-4cf9-bcc6-686fbb85c3ba" containerName="ovn-config" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.317834 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="164b273e-8669-462b-a1b5-0091ff01e399" containerName="registry-server" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.319118 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.321452 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.321837 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.326781 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-b74jf"] Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.340386 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.469439 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-hm-ports\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.469828 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-config-data\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.470004 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-config-data-merged\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.470132 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-scripts\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.571955 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-config-data\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.572024 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-config-data-merged\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.572080 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-scripts\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.572199 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-hm-ports\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.573686 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-hm-ports\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.580292 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-config-data-merged\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.581802 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-scripts\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.595961 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6d9bb0-ce8d-4bff-865e-f32287ecde3d-config-data\") pod \"octavia-rsyslog-b74jf\" (UID: \"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d\") " pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:27 crc kubenswrapper[4903]: I0128 17:27:27.662431 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.037438 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-tx6hm"] Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.039827 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.043756 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.061589 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-tx6hm"] Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.197777 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/dc48e585-9285-4022-8f6b-805735b2247b-amphora-image\") pod \"octavia-image-upload-65dd99cb46-tx6hm\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.198114 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc48e585-9285-4022-8f6b-805735b2247b-httpd-config\") pod \"octavia-image-upload-65dd99cb46-tx6hm\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.300817 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc48e585-9285-4022-8f6b-805735b2247b-httpd-config\") pod \"octavia-image-upload-65dd99cb46-tx6hm\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.301164 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/dc48e585-9285-4022-8f6b-805735b2247b-amphora-image\") pod \"octavia-image-upload-65dd99cb46-tx6hm\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.301892 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/dc48e585-9285-4022-8f6b-805735b2247b-amphora-image\") pod \"octavia-image-upload-65dd99cb46-tx6hm\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.302900 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-b74jf"] Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.308290 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc48e585-9285-4022-8f6b-805735b2247b-httpd-config\") pod \"octavia-image-upload-65dd99cb46-tx6hm\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.388171 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-b74jf"] Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.388436 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.883994 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-tx6hm"] Jan 28 17:27:28 crc kubenswrapper[4903]: W0128 17:27:28.889027 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc48e585_9285_4022_8f6b_805735b2247b.slice/crio-065a4f48599a9451a7e76199bec94fac8b2118937f9aaede1ae396f5fc235fe5 WatchSource:0}: Error finding container 065a4f48599a9451a7e76199bec94fac8b2118937f9aaede1ae396f5fc235fe5: Status 404 returned error can't find the container with id 065a4f48599a9451a7e76199bec94fac8b2118937f9aaede1ae396f5fc235fe5 Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.957638 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-b74jf" event={"ID":"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d","Type":"ContainerStarted","Data":"96a1ac78475cf474cf9ad8c5561329ebae48ea337a024f67d937af7ec1d798a7"} Jan 28 17:27:28 crc kubenswrapper[4903]: I0128 17:27:28.959231 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" event={"ID":"dc48e585-9285-4022-8f6b-805735b2247b","Type":"ContainerStarted","Data":"065a4f48599a9451a7e76199bec94fac8b2118937f9aaede1ae396f5fc235fe5"} Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.415856 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-cmhx7"] Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.418979 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.421225 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.426070 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-cmhx7"] Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.536697 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.536799 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-scripts\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.536861 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-combined-ca-bundle\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.536894 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data-merged\") pod \"octavia-db-sync-cmhx7\" (UID: 
\"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.638921 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.639040 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-scripts\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.639149 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-combined-ca-bundle\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.639191 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data-merged\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.640337 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data-merged\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.645591 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.651803 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-scripts\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.653474 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-combined-ca-bundle\") pod \"octavia-db-sync-cmhx7\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:29 crc kubenswrapper[4903]: I0128 17:27:29.751474 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:30 crc kubenswrapper[4903]: I0128 17:27:30.561384 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-cmhx7"] Jan 28 17:27:30 crc kubenswrapper[4903]: I0128 17:27:30.596424 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:27:30 crc kubenswrapper[4903]: I0128 17:27:30.724393 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:27:30 crc kubenswrapper[4903]: I0128 17:27:30.984034 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-cmhx7" event={"ID":"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e","Type":"ContainerStarted","Data":"e90929c8cc136b58d8bb86c9b71cc1b96b4e250783181c4081a2a4e5221ee4d8"} Jan 28 17:27:30 crc kubenswrapper[4903]: I0128 17:27:30.987326 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-b74jf" event={"ID":"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d","Type":"ContainerStarted","Data":"b57ed6ef4b3258dac2124eb85cad1c880d69dfc93b32132c6d3012cb37b41f85"} Jan 28 17:27:32 crc kubenswrapper[4903]: I0128 17:27:31.999820 4903 generic.go:334] "Generic (PLEG): container finished" podID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" containerID="f368d7e804df4ce989ce29411c561fc04a046233891fe69cb7a992cd1bd2df5d" exitCode=0 Jan 28 17:27:32 crc kubenswrapper[4903]: I0128 17:27:31.999880 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-cmhx7" event={"ID":"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e","Type":"ContainerDied","Data":"f368d7e804df4ce989ce29411c561fc04a046233891fe69cb7a992cd1bd2df5d"} Jan 28 17:27:33 crc kubenswrapper[4903]: I0128 17:27:33.012136 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-cmhx7" event={"ID":"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e","Type":"ContainerStarted","Data":"0bf86165bbb46d121c5e6392ba6b328c0c3dd2fd07dda191f67558a3b04e5bba"} Jan 28 17:27:33 crc kubenswrapper[4903]: I0128 17:27:33.035956 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-cmhx7" podStartSLOduration=4.03593814 podStartE2EDuration="4.03593814s" podCreationTimestamp="2026-01-28 17:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:27:33.028195161 +0000 UTC m=+6125.304166672" watchObservedRunningTime="2026-01-28 17:27:33.03593814 +0000 UTC m=+6125.311909651" Jan 28 17:27:34 crc kubenswrapper[4903]: I0128 17:27:34.026408 4903 generic.go:334] "Generic (PLEG): container finished" podID="2e6d9bb0-ce8d-4bff-865e-f32287ecde3d" containerID="b57ed6ef4b3258dac2124eb85cad1c880d69dfc93b32132c6d3012cb37b41f85" exitCode=0 Jan 28 17:27:34 crc kubenswrapper[4903]: I0128 17:27:34.026494 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-b74jf" event={"ID":"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d","Type":"ContainerDied","Data":"b57ed6ef4b3258dac2124eb85cad1c880d69dfc93b32132c6d3012cb37b41f85"} Jan 28 17:27:37 crc kubenswrapper[4903]: I0128 17:27:37.059832 4903 generic.go:334] "Generic (PLEG): container finished" podID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" containerID="0bf86165bbb46d121c5e6392ba6b328c0c3dd2fd07dda191f67558a3b04e5bba" exitCode=0 Jan 28 17:27:37 crc kubenswrapper[4903]: I0128 17:27:37.059934 4903 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/octavia-db-sync-cmhx7" event={"ID":"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e","Type":"ContainerDied","Data":"0bf86165bbb46d121c5e6392ba6b328c0c3dd2fd07dda191f67558a3b04e5bba"} Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.071160 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" event={"ID":"dc48e585-9285-4022-8f6b-805735b2247b","Type":"ContainerStarted","Data":"88872ba05ff6683e838f4f44dd63e95937ed028220b451c609a2e55b8f21edab"} Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.075912 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-b74jf" event={"ID":"2e6d9bb0-ce8d-4bff-865e-f32287ecde3d","Type":"ContainerStarted","Data":"1ff9c44b8a34abef3dd272c910a470902dc38dcbc1181f93ec89b9ac558d9cf3"} Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.076259 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.122910 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-b74jf" podStartSLOduration=2.013296434 podStartE2EDuration="11.122891207s" podCreationTimestamp="2026-01-28 17:27:27 +0000 UTC" firstStartedPulling="2026-01-28 17:27:28.283285802 +0000 UTC m=+6120.559257313" lastFinishedPulling="2026-01-28 17:27:37.392880575 +0000 UTC m=+6129.668852086" observedRunningTime="2026-01-28 17:27:38.113132032 +0000 UTC m=+6130.389103553" watchObservedRunningTime="2026-01-28 17:27:38.122891207 +0000 UTC m=+6130.398862718" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.500522 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.529111 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data\") pod \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.529560 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data-merged\") pod \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.529780 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-combined-ca-bundle\") pod \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.529909 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-scripts\") pod \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\" (UID: \"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e\") " Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.536677 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-scripts" (OuterVolumeSpecName: "scripts") pod "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" (UID: "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.536692 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data" (OuterVolumeSpecName: "config-data") pod "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" (UID: "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.564631 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" (UID: "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.568100 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" (UID: "a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.632922 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.632959 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.632969 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:38 crc kubenswrapper[4903]: I0128 17:27:38.632977 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.086218 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-cmhx7" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.086435 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-cmhx7" event={"ID":"a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e","Type":"ContainerDied","Data":"e90929c8cc136b58d8bb86c9b71cc1b96b4e250783181c4081a2a4e5221ee4d8"} Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.086654 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90929c8cc136b58d8bb86c9b71cc1b96b4e250783181c4081a2a4e5221ee4d8" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.090519 4903 generic.go:334] "Generic (PLEG): container finished" podID="dc48e585-9285-4022-8f6b-805735b2247b" containerID="88872ba05ff6683e838f4f44dd63e95937ed028220b451c609a2e55b8f21edab" exitCode=0 Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.090608 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" event={"ID":"dc48e585-9285-4022-8f6b-805735b2247b","Type":"ContainerDied","Data":"88872ba05ff6683e838f4f44dd63e95937ed028220b451c609a2e55b8f21edab"} Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.546730 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6b947f8865-c5x5j"] Jan 28 17:27:39 crc kubenswrapper[4903]: E0128 17:27:39.548685 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" containerName="init" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.548787 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" containerName="init" Jan 28 17:27:39 crc kubenswrapper[4903]: E0128 17:27:39.548854 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" containerName="octavia-db-sync" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.548864 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" containerName="octavia-db-sync" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.549734 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" containerName="octavia-db-sync" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.555299 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.565648 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.565710 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.599545 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6b947f8865-c5x5j"] Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.659964 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-public-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.660031 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b871c170-662e-4f2d-8f9a-36e39b8b6750-config-data-merged\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.660100 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-config-data\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.660151 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-ovndb-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.660170 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-internal-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.660199 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-scripts\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.660217 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/b871c170-662e-4f2d-8f9a-36e39b8b6750-octavia-run\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.660361 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-combined-ca-bundle\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.766972 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-internal-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.767254 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-scripts\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.767360 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/b871c170-662e-4f2d-8f9a-36e39b8b6750-octavia-run\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.767452 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-combined-ca-bundle\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.767669 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-public-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.767796 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b871c170-662e-4f2d-8f9a-36e39b8b6750-config-data-merged\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.767897 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-config-data\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.768021 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-ovndb-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.768016 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b871c170-662e-4f2d-8f9a-36e39b8b6750-octavia-run\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.768331 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/b871c170-662e-4f2d-8f9a-36e39b8b6750-config-data-merged\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.771680 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-ovndb-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.771796 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-config-data\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.772646 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-public-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.772996 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-scripts\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.773290 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-combined-ca-bundle\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.773780 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b871c170-662e-4f2d-8f9a-36e39b8b6750-internal-tls-certs\") pod \"octavia-api-6b947f8865-c5x5j\" (UID: \"b871c170-662e-4f2d-8f9a-36e39b8b6750\") " pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:39 crc kubenswrapper[4903]: I0128 17:27:39.893326 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:40 crc kubenswrapper[4903]: I0128 17:27:40.101824 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" event={"ID":"dc48e585-9285-4022-8f6b-805735b2247b","Type":"ContainerStarted","Data":"121ccc380a35dac833a3c20eeed9933563bfe8ada54941c0f6641c80e0751d22"} Jan 28 17:27:40 crc kubenswrapper[4903]: I0128 17:27:40.124342 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" podStartSLOduration=3.571127788 podStartE2EDuration="12.124325587s" podCreationTimestamp="2026-01-28 17:27:28 +0000 UTC" firstStartedPulling="2026-01-28 17:27:28.891773248 +0000 UTC m=+6121.167744759" lastFinishedPulling="2026-01-28 17:27:37.444971047 +0000 UTC m=+6129.720942558" observedRunningTime="2026-01-28 17:27:40.119914517 +0000 UTC m=+6132.395886028" watchObservedRunningTime="2026-01-28 17:27:40.124325587 +0000 UTC m=+6132.400297098" Jan 28 17:27:40 crc kubenswrapper[4903]: I0128 17:27:40.378207 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6b947f8865-c5x5j"] Jan 28 17:27:41 crc kubenswrapper[4903]: I0128 17:27:41.120824 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b947f8865-c5x5j" event={"ID":"b871c170-662e-4f2d-8f9a-36e39b8b6750","Type":"ContainerStarted","Data":"8ffbbcc1d247ce4fc69c3a0714a1062e017b844b5a5e0323b0e8c1b41608f509"} Jan 28 17:27:42 crc kubenswrapper[4903]: I0128 17:27:42.131175 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b947f8865-c5x5j" event={"ID":"b871c170-662e-4f2d-8f9a-36e39b8b6750","Type":"ContainerStarted","Data":"14def006aa51303b893545e7b89978d0238d6f827e1bc2902b40a55d13d6eda3"} Jan 28 17:27:42 crc kubenswrapper[4903]: I0128 17:27:42.693686 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-b74jf" Jan 28 17:27:43 crc kubenswrapper[4903]: I0128 17:27:43.140237 4903 generic.go:334] "Generic (PLEG): container finished" podID="b871c170-662e-4f2d-8f9a-36e39b8b6750" containerID="14def006aa51303b893545e7b89978d0238d6f827e1bc2902b40a55d13d6eda3" exitCode=0 Jan 28 17:27:43 crc kubenswrapper[4903]: I0128 17:27:43.140281 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b947f8865-c5x5j" event={"ID":"b871c170-662e-4f2d-8f9a-36e39b8b6750","Type":"ContainerDied","Data":"14def006aa51303b893545e7b89978d0238d6f827e1bc2902b40a55d13d6eda3"} Jan 28 17:27:44 crc kubenswrapper[4903]: I0128 17:27:44.159336 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b947f8865-c5x5j" event={"ID":"b871c170-662e-4f2d-8f9a-36e39b8b6750","Type":"ContainerStarted","Data":"f8c8fae00cb01fdfc7a9624b60f5fdc4250e6c77295736f91fd21ecf257c7571"} Jan 28 17:27:44 crc kubenswrapper[4903]: I0128 17:27:44.160014 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:44 crc kubenswrapper[4903]: I0128 17:27:44.160038 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6b947f8865-c5x5j" event={"ID":"b871c170-662e-4f2d-8f9a-36e39b8b6750","Type":"ContainerStarted","Data":"4d281c475fc146733d0acd6d1ff2e84988cc7ea2b2f581cd9b509516227d8888"} Jan 28 17:27:44 crc kubenswrapper[4903]: I0128 17:27:44.160069 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 
28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.556953 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6b947f8865-c5x5j" podStartSLOduration=6.556930429 podStartE2EDuration="6.556930429s" podCreationTimestamp="2026-01-28 17:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:27:44.187040591 +0000 UTC m=+6136.463012122" watchObservedRunningTime="2026-01-28 17:27:45.556930429 +0000 UTC m=+6137.832901940" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.562482 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rzstr"] Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.564645 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.574827 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzstr"] Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.584490 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4k5\" (UniqueName: \"kubernetes.io/projected/89e137b4-8dc4-4500-8318-fe8f47f56e1b-kube-api-access-9h4k5\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.584536 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89e137b4-8dc4-4500-8318-fe8f47f56e1b-utilities\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.584648 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89e137b4-8dc4-4500-8318-fe8f47f56e1b-catalog-content\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.687619 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89e137b4-8dc4-4500-8318-fe8f47f56e1b-catalog-content\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.687762 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h4k5\" (UniqueName: \"kubernetes.io/projected/89e137b4-8dc4-4500-8318-fe8f47f56e1b-kube-api-access-9h4k5\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.687794 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89e137b4-8dc4-4500-8318-fe8f47f56e1b-utilities\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 
17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.688237 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89e137b4-8dc4-4500-8318-fe8f47f56e1b-catalog-content\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.688315 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89e137b4-8dc4-4500-8318-fe8f47f56e1b-utilities\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.712266 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h4k5\" (UniqueName: \"kubernetes.io/projected/89e137b4-8dc4-4500-8318-fe8f47f56e1b-kube-api-access-9h4k5\") pod \"community-operators-rzstr\" (UID: \"89e137b4-8dc4-4500-8318-fe8f47f56e1b\") " pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:45 crc kubenswrapper[4903]: I0128 17:27:45.896713 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:46 crc kubenswrapper[4903]: I0128 17:27:46.351791 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzstr"] Jan 28 17:27:47 crc kubenswrapper[4903]: I0128 17:27:47.218553 4903 generic.go:334] "Generic (PLEG): container finished" podID="89e137b4-8dc4-4500-8318-fe8f47f56e1b" containerID="f4ba65b0779beda53fb8cee13c25ef792d36f58e3e2977d405468e3ee903ddb4" exitCode=0 Jan 28 17:27:47 crc kubenswrapper[4903]: I0128 17:27:47.218579 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzstr" event={"ID":"89e137b4-8dc4-4500-8318-fe8f47f56e1b","Type":"ContainerDied","Data":"f4ba65b0779beda53fb8cee13c25ef792d36f58e3e2977d405468e3ee903ddb4"} Jan 28 17:27:47 crc kubenswrapper[4903]: I0128 17:27:47.219047 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzstr" event={"ID":"89e137b4-8dc4-4500-8318-fe8f47f56e1b","Type":"ContainerStarted","Data":"4bd806b6561af470d1f3d3c7cb4c8dc0d48c9cf35dcd5f4f0557904dbf61b354"} Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.549460 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5mdx7"] Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.559557 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.574706 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5mdx7"] Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.597494 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-utilities\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.597634 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2t6\" (UniqueName: \"kubernetes.io/projected/92a654e1-6894-483a-b7bf-f699ce05e2c7-kube-api-access-zr2t6\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.597767 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-catalog-content\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.700010 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-catalog-content\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.700157 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-utilities\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.700197 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2t6\" (UniqueName: \"kubernetes.io/projected/92a654e1-6894-483a-b7bf-f699ce05e2c7-kube-api-access-zr2t6\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.700988 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-catalog-content\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.701857 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-utilities\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.720382 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zr2t6\" (UniqueName: \"kubernetes.io/projected/92a654e1-6894-483a-b7bf-f699ce05e2c7-kube-api-access-zr2t6\") pod \"redhat-operators-5mdx7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:48 crc kubenswrapper[4903]: I0128 17:27:48.901103 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:49 crc kubenswrapper[4903]: I0128 17:27:49.443494 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5mdx7"] Jan 28 17:27:49 crc kubenswrapper[4903]: W0128 17:27:49.453206 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a654e1_6894_483a_b7bf_f699ce05e2c7.slice/crio-780750ba2b932de11f0a5b4ce4b08e476fe87a2cfd2cb24b2af7d808dc467602 WatchSource:0}: Error finding container 780750ba2b932de11f0a5b4ce4b08e476fe87a2cfd2cb24b2af7d808dc467602: Status 404 returned error can't find the container with id 780750ba2b932de11f0a5b4ce4b08e476fe87a2cfd2cb24b2af7d808dc467602 Jan 28 17:27:50 crc kubenswrapper[4903]: I0128 17:27:50.264685 4903 generic.go:334] "Generic (PLEG): container finished" podID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerID="0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55" exitCode=0 Jan 28 17:27:50 crc kubenswrapper[4903]: I0128 17:27:50.265009 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5mdx7" event={"ID":"92a654e1-6894-483a-b7bf-f699ce05e2c7","Type":"ContainerDied","Data":"0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55"} Jan 28 17:27:50 crc kubenswrapper[4903]: I0128 17:27:50.265039 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5mdx7" event={"ID":"92a654e1-6894-483a-b7bf-f699ce05e2c7","Type":"ContainerStarted","Data":"780750ba2b932de11f0a5b4ce4b08e476fe87a2cfd2cb24b2af7d808dc467602"} Jan 28 17:27:53 crc kubenswrapper[4903]: I0128 17:27:53.300063 4903 generic.go:334] "Generic (PLEG): container finished" podID="89e137b4-8dc4-4500-8318-fe8f47f56e1b" containerID="e688a4c6361491f51dce891f845103fc46ce30145e04de7e9f51785093c2de5b" exitCode=0 Jan 28 17:27:53 crc kubenswrapper[4903]: I0128 17:27:53.300948 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzstr" event={"ID":"89e137b4-8dc4-4500-8318-fe8f47f56e1b","Type":"ContainerDied","Data":"e688a4c6361491f51dce891f845103fc46ce30145e04de7e9f51785093c2de5b"} Jan 28 17:27:54 crc kubenswrapper[4903]: I0128 17:27:54.315040 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5mdx7" event={"ID":"92a654e1-6894-483a-b7bf-f699ce05e2c7","Type":"ContainerStarted","Data":"473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4"} Jan 28 17:27:55 crc kubenswrapper[4903]: I0128 17:27:55.341204 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzstr" event={"ID":"89e137b4-8dc4-4500-8318-fe8f47f56e1b","Type":"ContainerStarted","Data":"1394c52d12fbcbad1a40e28316b1fdfe66817fae6fe0aebc1bc303f1b8822197"} Jan 28 17:27:55 crc kubenswrapper[4903]: I0128 17:27:55.374352 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rzstr" podStartSLOduration=2.872000326 podStartE2EDuration="10.37433034s" 
podCreationTimestamp="2026-01-28 17:27:45 +0000 UTC" firstStartedPulling="2026-01-28 17:27:47.220587506 +0000 UTC m=+6139.496559017" lastFinishedPulling="2026-01-28 17:27:54.72291752 +0000 UTC m=+6146.998889031" observedRunningTime="2026-01-28 17:27:55.364069371 +0000 UTC m=+6147.640040882" watchObservedRunningTime="2026-01-28 17:27:55.37433034 +0000 UTC m=+6147.650301851" Jan 28 17:27:55 crc kubenswrapper[4903]: I0128 17:27:55.896925 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:55 crc kubenswrapper[4903]: I0128 17:27:55.897248 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:27:56 crc kubenswrapper[4903]: I0128 17:27:56.350066 4903 generic.go:334] "Generic (PLEG): container finished" podID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerID="473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4" exitCode=0 Jan 28 17:27:56 crc kubenswrapper[4903]: I0128 17:27:56.350142 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5mdx7" event={"ID":"92a654e1-6894-483a-b7bf-f699ce05e2c7","Type":"ContainerDied","Data":"473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4"} Jan 28 17:27:56 crc kubenswrapper[4903]: I0128 17:27:56.614003 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:27:56 crc kubenswrapper[4903]: I0128 17:27:56.614355 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:27:56 crc kubenswrapper[4903]: I0128 17:27:56.950049 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rzstr" podUID="89e137b4-8dc4-4500-8318-fe8f47f56e1b" containerName="registry-server" probeResult="failure" output=< Jan 28 17:27:56 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:27:56 crc kubenswrapper[4903]: > Jan 28 17:27:58 crc kubenswrapper[4903]: I0128 17:27:58.375664 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5mdx7" event={"ID":"92a654e1-6894-483a-b7bf-f699ce05e2c7","Type":"ContainerStarted","Data":"73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92"} Jan 28 17:27:58 crc kubenswrapper[4903]: I0128 17:27:58.901818 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:58 crc kubenswrapper[4903]: I0128 17:27:58.901897 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:27:59 crc kubenswrapper[4903]: I0128 17:27:59.898159 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:27:59 crc kubenswrapper[4903]: I0128 17:27:59.925106 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-5mdx7" podStartSLOduration=6.967404931 podStartE2EDuration="11.925087171s" podCreationTimestamp="2026-01-28 17:27:48 +0000 UTC" firstStartedPulling="2026-01-28 17:27:52.194600878 +0000 UTC m=+6144.470572389" lastFinishedPulling="2026-01-28 17:27:57.152283118 +0000 UTC m=+6149.428254629" observedRunningTime="2026-01-28 17:27:58.398512072 +0000 UTC m=+6150.674483573" watchObservedRunningTime="2026-01-28 17:27:59.925087171 +0000 UTC m=+6152.201058682" Jan 28 17:27:59 crc kubenswrapper[4903]: I0128 17:27:59.955058 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5mdx7" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" probeResult="failure" output=< Jan 28 17:27:59 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:27:59 crc kubenswrapper[4903]: > Jan 28 17:28:00 crc kubenswrapper[4903]: I0128 17:28:00.160899 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6b947f8865-c5x5j" Jan 28 17:28:00 crc kubenswrapper[4903]: I0128 17:28:00.234563 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-9f94c9bc9-9mnhj"] Jan 28 17:28:00 crc kubenswrapper[4903]: I0128 17:28:00.235165 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-9f94c9bc9-9mnhj" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api" containerID="cri-o://899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f" gracePeriod=30 Jan 28 17:28:00 crc kubenswrapper[4903]: I0128 17:28:00.235626 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-9f94c9bc9-9mnhj" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api-provider-agent" containerID="cri-o://118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0" gracePeriod=30 Jan 28 17:28:01 crc kubenswrapper[4903]: I0128 17:28:01.405728 4903 generic.go:334] "Generic (PLEG): container finished" podID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerID="118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0" exitCode=0 Jan 28 17:28:01 crc kubenswrapper[4903]: I0128 17:28:01.405792 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerDied","Data":"118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0"} Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.911196 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.970981 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-ovndb-tls-certs\") pod \"920732d8-23d2-40c2-80d3-3b74e9843c96\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.971406 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-octavia-run\") pod \"920732d8-23d2-40c2-80d3-3b74e9843c96\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.971487 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-combined-ca-bundle\") pod \"920732d8-23d2-40c2-80d3-3b74e9843c96\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.971624 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data-merged\") pod \"920732d8-23d2-40c2-80d3-3b74e9843c96\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.971654 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-scripts\") pod \"920732d8-23d2-40c2-80d3-3b74e9843c96\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.971695 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data\") pod \"920732d8-23d2-40c2-80d3-3b74e9843c96\" (UID: \"920732d8-23d2-40c2-80d3-3b74e9843c96\") " Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.972158 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "920732d8-23d2-40c2-80d3-3b74e9843c96" (UID: "920732d8-23d2-40c2-80d3-3b74e9843c96"). InnerVolumeSpecName "octavia-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.972492 4903 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-octavia-run\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.979708 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data" (OuterVolumeSpecName: "config-data") pod "920732d8-23d2-40c2-80d3-3b74e9843c96" (UID: "920732d8-23d2-40c2-80d3-3b74e9843c96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:28:03 crc kubenswrapper[4903]: I0128 17:28:03.982414 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-scripts" (OuterVolumeSpecName: "scripts") pod "920732d8-23d2-40c2-80d3-3b74e9843c96" (UID: "920732d8-23d2-40c2-80d3-3b74e9843c96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.034746 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "920732d8-23d2-40c2-80d3-3b74e9843c96" (UID: "920732d8-23d2-40c2-80d3-3b74e9843c96"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.062202 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "920732d8-23d2-40c2-80d3-3b74e9843c96" (UID: "920732d8-23d2-40c2-80d3-3b74e9843c96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.074915 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.074954 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.074967 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.074979 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.174663 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "920732d8-23d2-40c2-80d3-3b74e9843c96" (UID: "920732d8-23d2-40c2-80d3-3b74e9843c96"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.176830 4903 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/920732d8-23d2-40c2-80d3-3b74e9843c96-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.442428 4903 generic.go:334] "Generic (PLEG): container finished" podID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerID="899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f" exitCode=0 Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.442469 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerDied","Data":"899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f"} Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.442496 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-9f94c9bc9-9mnhj" event={"ID":"920732d8-23d2-40c2-80d3-3b74e9843c96","Type":"ContainerDied","Data":"2f6370a33e42966ed9ff78e447bcd07f7ea06c1b4d5f5b5156d35524ef44b63c"} Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.442515 4903 scope.go:117] "RemoveContainer" containerID="118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.442693 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-9f94c9bc9-9mnhj" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.479426 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-9f94c9bc9-9mnhj"] Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.495308 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-9f94c9bc9-9mnhj"] Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.533391 4903 scope.go:117] "RemoveContainer" containerID="899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.563371 4903 scope.go:117] "RemoveContainer" containerID="4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.615869 4903 scope.go:117] "RemoveContainer" containerID="118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0" Jan 28 17:28:04 crc kubenswrapper[4903]: E0128 17:28:04.616494 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0\": container with ID starting with 118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0 not found: ID does not exist" containerID="118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.616548 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0"} err="failed to get container status \"118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0\": rpc error: code = NotFound desc = could not find container \"118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0\": container with ID starting with 118e2f344e7de15a79d297814f6bd4cb2d547c2f0e035437a04a03949707adf0 not found: ID does not exist" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.616573 4903 
scope.go:117] "RemoveContainer" containerID="899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f" Jan 28 17:28:04 crc kubenswrapper[4903]: E0128 17:28:04.616854 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f\": container with ID starting with 899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f not found: ID does not exist" containerID="899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.616884 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f"} err="failed to get container status \"899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f\": rpc error: code = NotFound desc = could not find container \"899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f\": container with ID starting with 899438121ba54d7ef272f8c4b2ff74ee736bd1b127206cfe935516fdf3a1e52f not found: ID does not exist" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.616905 4903 scope.go:117] "RemoveContainer" containerID="4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267" Jan 28 17:28:04 crc kubenswrapper[4903]: E0128 17:28:04.617102 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267\": container with ID starting with 4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267 not found: ID does not exist" containerID="4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267" Jan 28 17:28:04 crc kubenswrapper[4903]: I0128 17:28:04.617133 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267"} err="failed to get container status \"4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267\": rpc error: code = NotFound desc = could not find container \"4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267\": container with ID starting with 4679e77691a849e03fda6b2947bc423b328d31ccadf46555e33a3f602e6e9267 not found: ID does not exist" Jan 28 17:28:06 crc kubenswrapper[4903]: I0128 17:28:06.426090 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" path="/var/lib/kubelet/pods/920732d8-23d2-40c2-80d3-3b74e9843c96/volumes" Jan 28 17:28:06 crc kubenswrapper[4903]: I0128 17:28:06.946402 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rzstr" podUID="89e137b4-8dc4-4500-8318-fe8f47f56e1b" containerName="registry-server" probeResult="failure" output=< Jan 28 17:28:06 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:28:06 crc kubenswrapper[4903]: > Jan 28 17:28:09 crc kubenswrapper[4903]: I0128 17:28:09.954422 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5mdx7" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" probeResult="failure" output=< Jan 28 17:28:09 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:28:09 crc kubenswrapper[4903]: > Jan 28 17:28:11 crc kubenswrapper[4903]: I0128 17:28:11.891368 
4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-tx6hm"] Jan 28 17:28:11 crc kubenswrapper[4903]: I0128 17:28:11.893456 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" podUID="dc48e585-9285-4022-8f6b-805735b2247b" containerName="octavia-amphora-httpd" containerID="cri-o://121ccc380a35dac833a3c20eeed9933563bfe8ada54941c0f6641c80e0751d22" gracePeriod=30 Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.519748 4903 generic.go:334] "Generic (PLEG): container finished" podID="dc48e585-9285-4022-8f6b-805735b2247b" containerID="121ccc380a35dac833a3c20eeed9933563bfe8ada54941c0f6641c80e0751d22" exitCode=0 Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.519799 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" event={"ID":"dc48e585-9285-4022-8f6b-805735b2247b","Type":"ContainerDied","Data":"121ccc380a35dac833a3c20eeed9933563bfe8ada54941c0f6641c80e0751d22"} Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.519827 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" event={"ID":"dc48e585-9285-4022-8f6b-805735b2247b","Type":"ContainerDied","Data":"065a4f48599a9451a7e76199bec94fac8b2118937f9aaede1ae396f5fc235fe5"} Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.519839 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065a4f48599a9451a7e76199bec94fac8b2118937f9aaede1ae396f5fc235fe5" Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.526281 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.552830 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/dc48e585-9285-4022-8f6b-805735b2247b-amphora-image\") pod \"dc48e585-9285-4022-8f6b-805735b2247b\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.552921 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc48e585-9285-4022-8f6b-805735b2247b-httpd-config\") pod \"dc48e585-9285-4022-8f6b-805735b2247b\" (UID: \"dc48e585-9285-4022-8f6b-805735b2247b\") " Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.608960 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc48e585-9285-4022-8f6b-805735b2247b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "dc48e585-9285-4022-8f6b-805735b2247b" (UID: "dc48e585-9285-4022-8f6b-805735b2247b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.653573 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc48e585-9285-4022-8f6b-805735b2247b-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "dc48e585-9285-4022-8f6b-805735b2247b" (UID: "dc48e585-9285-4022-8f6b-805735b2247b"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.655228 4903 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/dc48e585-9285-4022-8f6b-805735b2247b-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:12 crc kubenswrapper[4903]: I0128 17:28:12.655274 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dc48e585-9285-4022-8f6b-805735b2247b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:13 crc kubenswrapper[4903]: I0128 17:28:13.527774 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-tx6hm" Jan 28 17:28:13 crc kubenswrapper[4903]: I0128 17:28:13.567890 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-tx6hm"] Jan 28 17:28:13 crc kubenswrapper[4903]: I0128 17:28:13.577361 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-tx6hm"] Jan 28 17:28:14 crc kubenswrapper[4903]: I0128 17:28:14.423667 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc48e585-9285-4022-8f6b-805735b2247b" path="/var/lib/kubelet/pods/dc48e585-9285-4022-8f6b-805735b2247b/volumes" Jan 28 17:28:15 crc kubenswrapper[4903]: I0128 17:28:15.951225 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:28:15 crc kubenswrapper[4903]: I0128 17:28:15.995909 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rzstr" Jan 28 17:28:16 crc kubenswrapper[4903]: I0128 17:28:16.578009 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzstr"] Jan 28 17:28:16 crc kubenswrapper[4903]: I0128 17:28:16.756290 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5h2p"] Jan 28 17:28:16 crc kubenswrapper[4903]: I0128 17:28:16.756601 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v5h2p" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="registry-server" containerID="cri-o://566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777" gracePeriod=2 Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.296567 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.340268 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-utilities\") pod \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.340430 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m62d5\" (UniqueName: \"kubernetes.io/projected/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-kube-api-access-m62d5\") pod \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.341047 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-utilities" (OuterVolumeSpecName: "utilities") pod "ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" (UID: "ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.341313 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-catalog-content\") pod \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\" (UID: \"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b\") " Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.341828 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.349222 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-kube-api-access-m62d5" (OuterVolumeSpecName: "kube-api-access-m62d5") pod "ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" (UID: "ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b"). InnerVolumeSpecName "kube-api-access-m62d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.414130 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" (UID: "ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.442816 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.442845 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m62d5\" (UniqueName: \"kubernetes.io/projected/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b-kube-api-access-m62d5\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.566639 4903 generic.go:334] "Generic (PLEG): container finished" podID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerID="566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777" exitCode=0 Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.566694 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5h2p" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.566710 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5h2p" event={"ID":"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b","Type":"ContainerDied","Data":"566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777"} Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.566757 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5h2p" event={"ID":"ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b","Type":"ContainerDied","Data":"3b246ee9eca81692ac728d6d0ef32991f81c75563ee4a61aa645d5c15491e767"} Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.566777 4903 scope.go:117] "RemoveContainer" containerID="566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.607308 4903 scope.go:117] "RemoveContainer" containerID="48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.609922 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5h2p"] Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.619648 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5h2p"] Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.633740 4903 scope.go:117] "RemoveContainer" containerID="4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.678360 4903 scope.go:117] "RemoveContainer" containerID="566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777" Jan 28 17:28:17 crc kubenswrapper[4903]: E0128 17:28:17.678804 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777\": container with ID starting with 566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777 not found: ID does not exist" containerID="566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.678833 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777"} err="failed to get container status 
\"566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777\": rpc error: code = NotFound desc = could not find container \"566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777\": container with ID starting with 566bc19883708b9fa917b0a2f2b3809a57ea65d6f324982269d2c90496089777 not found: ID does not exist" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.678857 4903 scope.go:117] "RemoveContainer" containerID="48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8" Jan 28 17:28:17 crc kubenswrapper[4903]: E0128 17:28:17.679134 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8\": container with ID starting with 48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8 not found: ID does not exist" containerID="48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.679204 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8"} err="failed to get container status \"48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8\": rpc error: code = NotFound desc = could not find container \"48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8\": container with ID starting with 48ab3e31506f9f26879d9a20b031923216dc3f8c767163477e2f15d2cccb9ad8 not found: ID does not exist" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.679229 4903 scope.go:117] "RemoveContainer" containerID="4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70" Jan 28 17:28:17 crc kubenswrapper[4903]: E0128 17:28:17.679605 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70\": container with ID starting with 4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70 not found: ID does not exist" containerID="4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70" Jan 28 17:28:17 crc kubenswrapper[4903]: I0128 17:28:17.679684 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70"} err="failed to get container status \"4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70\": rpc error: code = NotFound desc = could not find container \"4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70\": container with ID starting with 4cc64ad609272ab1572a50c685a032eca834e6a89067ab9a65b6ab816e5aef70 not found: ID does not exist" Jan 28 17:28:18 crc kubenswrapper[4903]: I0128 17:28:18.424958 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" path="/var/lib/kubelet/pods/ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b/volumes" Jan 28 17:28:19 crc kubenswrapper[4903]: I0128 17:28:19.946412 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5mdx7" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" probeResult="failure" output=< Jan 28 17:28:19 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:28:19 crc kubenswrapper[4903]: > Jan 28 17:28:26 crc kubenswrapper[4903]: I0128 17:28:26.613629 4903 patch_prober.go:28] 
interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:28:26 crc kubenswrapper[4903]: I0128 17:28:26.614167 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:28:29 crc kubenswrapper[4903]: I0128 17:28:29.949668 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5mdx7" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" probeResult="failure" output=< Jan 28 17:28:29 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:28:29 crc kubenswrapper[4903]: > Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.801490 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-j6f6z"] Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802257 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="registry-server" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802269 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="registry-server" Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802280 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc48e585-9285-4022-8f6b-805735b2247b" containerName="octavia-amphora-httpd" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802286 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc48e585-9285-4022-8f6b-805735b2247b" containerName="octavia-amphora-httpd" Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802294 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802300 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api" Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802313 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="extract-content" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802319 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="extract-content" Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802336 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc48e585-9285-4022-8f6b-805735b2247b" containerName="init" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802343 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc48e585-9285-4022-8f6b-805735b2247b" containerName="init" Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802360 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api-provider-agent" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802372 4903 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api-provider-agent" Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802384 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="init" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802392 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="init" Jan 28 17:28:31 crc kubenswrapper[4903]: E0128 17:28:31.802421 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="extract-utilities" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802431 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="extract-utilities" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802694 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api-provider-agent" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802725 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc6f8cb-e0cf-41c5-b58f-b711d0d77a0b" containerName="registry-server" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802735 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="920732d8-23d2-40c2-80d3-3b74e9843c96" containerName="octavia-api" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.802751 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc48e585-9285-4022-8f6b-805735b2247b" containerName="octavia-amphora-httpd" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.803703 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.806113 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.806990 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.809682 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.836873 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-j6f6z"] Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.926294 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2e7cd217-861f-44d4-b7c7-584451114054-hm-ports\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.926399 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-config-data\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.926437 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2e7cd217-861f-44d4-b7c7-584451114054-config-data-merged\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.926476 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-amphora-certs\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.927261 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-scripts\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:31 crc kubenswrapper[4903]: I0128 17:28:31.927335 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-combined-ca-bundle\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.029780 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-config-data\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc 
kubenswrapper[4903]: I0128 17:28:32.029860 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2e7cd217-861f-44d4-b7c7-584451114054-config-data-merged\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.029889 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-amphora-certs\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.029936 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-scripts\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.030000 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-combined-ca-bundle\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.030101 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2e7cd217-861f-44d4-b7c7-584451114054-hm-ports\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.031723 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2e7cd217-861f-44d4-b7c7-584451114054-hm-ports\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.033755 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2e7cd217-861f-44d4-b7c7-584451114054-config-data-merged\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.040358 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-combined-ca-bundle\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.052968 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-scripts\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.053942 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" 
(UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-amphora-certs\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.055171 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e7cd217-861f-44d4-b7c7-584451114054-config-data\") pod \"octavia-healthmanager-j6f6z\" (UID: \"2e7cd217-861f-44d4-b7c7-584451114054\") " pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:32 crc kubenswrapper[4903]: I0128 17:28:32.121518 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.254238 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-j6f6z"] Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.601658 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-xd8j6"] Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.604018 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.606949 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.608293 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.618367 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-xd8j6"] Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.678151 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-config-data\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.678229 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-scripts\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.678368 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-combined-ca-bundle\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.678392 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-amphora-certs\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.678416 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-config-data-merged\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.678491 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-hm-ports\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.716499 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6f6z" event={"ID":"2e7cd217-861f-44d4-b7c7-584451114054","Type":"ContainerStarted","Data":"3dbf97281fe3e7776882eac823e8b5f8c20e49f47b259542e87a0977f5c624ab"} Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.781051 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-config-data\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.781131 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-scripts\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.781214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-combined-ca-bundle\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.781241 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-amphora-certs\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.781264 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-config-data-merged\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.781308 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-hm-ports\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.781926 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-config-data-merged\") pod \"octavia-housekeeping-xd8j6\" (UID: 
\"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.782435 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-hm-ports\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.792180 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-config-data\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.793668 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-amphora-certs\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.793949 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-scripts\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.794521 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73-combined-ca-bundle\") pod \"octavia-housekeeping-xd8j6\" (UID: \"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73\") " pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:33 crc kubenswrapper[4903]: I0128 17:28:33.936780 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.550782 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-24nfn"] Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.552863 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.555888 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.556224 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.607253 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-hm-ports\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.607294 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-scripts\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.607323 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-amphora-certs\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.607356 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-config-data-merged\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.607454 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-config-data\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.607479 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-combined-ca-bundle\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.616590 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-24nfn"] Jan 28 17:28:34 crc kubenswrapper[4903]: W0128 17:28:34.675360 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc43c649d_8f5b_4aa9_a29d_5fb42e8c3f73.slice/crio-25601df21032ddce8eba52d6992b1c6c5df2f951375cf9606d00a731895af6f4 WatchSource:0}: Error finding container 25601df21032ddce8eba52d6992b1c6c5df2f951375cf9606d00a731895af6f4: Status 404 returned error can't find the container with id 25601df21032ddce8eba52d6992b1c6c5df2f951375cf9606d00a731895af6f4 Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.679014 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/octavia-housekeeping-xd8j6"] Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.711195 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-config-data\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.711476 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-combined-ca-bundle\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.712398 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-hm-ports\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.713443 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-scripts\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.713914 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-amphora-certs\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.713399 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-hm-ports\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.719908 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-config-data-merged\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.720868 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-config-data\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.721403 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-scripts\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.721631 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-config-data-merged\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.721744 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-combined-ca-bundle\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.726634 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/a1cb5d5d-e469-408e-ba18-e9e77cd67f41-amphora-certs\") pod \"octavia-worker-24nfn\" (UID: \"a1cb5d5d-e469-408e-ba18-e9e77cd67f41\") " pod="openstack/octavia-worker-24nfn" Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.730790 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-xd8j6" event={"ID":"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73","Type":"ContainerStarted","Data":"25601df21032ddce8eba52d6992b1c6c5df2f951375cf9606d00a731895af6f4"} Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.732705 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6f6z" event={"ID":"2e7cd217-861f-44d4-b7c7-584451114054","Type":"ContainerStarted","Data":"f7f309ee6c40ef279a1bf6cda520dc4d697e5a7dc7f5535a4914dd6c7d7741d6"} Jan 28 17:28:34 crc kubenswrapper[4903]: I0128 17:28:34.887021 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-24nfn" Jan 28 17:28:35 crc kubenswrapper[4903]: I0128 17:28:35.463278 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-24nfn"] Jan 28 17:28:35 crc kubenswrapper[4903]: I0128 17:28:35.747009 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-24nfn" event={"ID":"a1cb5d5d-e469-408e-ba18-e9e77cd67f41","Type":"ContainerStarted","Data":"7c66141f691b1b7b59664eea1f79cbf04b2a706c5482644b7348a7119bc62782"} Jan 28 17:28:36 crc kubenswrapper[4903]: I0128 17:28:36.751512 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-j6f6z"] Jan 28 17:28:36 crc kubenswrapper[4903]: I0128 17:28:36.772499 4903 generic.go:334] "Generic (PLEG): container finished" podID="2e7cd217-861f-44d4-b7c7-584451114054" containerID="f7f309ee6c40ef279a1bf6cda520dc4d697e5a7dc7f5535a4914dd6c7d7741d6" exitCode=0 Jan 28 17:28:36 crc kubenswrapper[4903]: I0128 17:28:36.772596 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6f6z" event={"ID":"2e7cd217-861f-44d4-b7c7-584451114054","Type":"ContainerDied","Data":"f7f309ee6c40ef279a1bf6cda520dc4d697e5a7dc7f5535a4914dd6c7d7741d6"} Jan 28 17:28:37 crc kubenswrapper[4903]: I0128 17:28:37.788729 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-xd8j6" event={"ID":"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73","Type":"ContainerStarted","Data":"5071d181cc781f694fdb14c510d6906e5f99f787647c25d0646088f4e4f080cc"} Jan 28 17:28:37 crc kubenswrapper[4903]: I0128 17:28:37.791940 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-j6f6z" event={"ID":"2e7cd217-861f-44d4-b7c7-584451114054","Type":"ContainerStarted","Data":"7d8b9074f63d54421884c55e9258426cb1469db4e09c5b93132f3d423dfabc3f"} 
Jan 28 17:28:37 crc kubenswrapper[4903]: I0128 17:28:37.792181 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:37 crc kubenswrapper[4903]: I0128 17:28:37.839264 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-j6f6z" podStartSLOduration=6.839240401 podStartE2EDuration="6.839240401s" podCreationTimestamp="2026-01-28 17:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:28:37.83148487 +0000 UTC m=+6190.107456381" watchObservedRunningTime="2026-01-28 17:28:37.839240401 +0000 UTC m=+6190.115211912" Jan 28 17:28:38 crc kubenswrapper[4903]: I0128 17:28:38.817585 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-24nfn" event={"ID":"a1cb5d5d-e469-408e-ba18-e9e77cd67f41","Type":"ContainerStarted","Data":"8b5a35b4b10c54c2c92173790d95c05cc7f0716336801d6589e373824b44afdd"} Jan 28 17:28:38 crc kubenswrapper[4903]: I0128 17:28:38.824708 4903 generic.go:334] "Generic (PLEG): container finished" podID="c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73" containerID="5071d181cc781f694fdb14c510d6906e5f99f787647c25d0646088f4e4f080cc" exitCode=0 Jan 28 17:28:38 crc kubenswrapper[4903]: I0128 17:28:38.824806 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-xd8j6" event={"ID":"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73","Type":"ContainerDied","Data":"5071d181cc781f694fdb14c510d6906e5f99f787647c25d0646088f4e4f080cc"} Jan 28 17:28:39 crc kubenswrapper[4903]: I0128 17:28:39.839942 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-xd8j6" event={"ID":"c43c649d-8f5b-4aa9-a29d-5fb42e8c3f73","Type":"ContainerStarted","Data":"ce91143629f2a501eae2cb408278c7cf4fabbfe921d54f6e0c4b9aff91913262"} Jan 28 17:28:39 crc kubenswrapper[4903]: I0128 17:28:39.841125 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:39 crc kubenswrapper[4903]: I0128 17:28:39.844027 4903 generic.go:334] "Generic (PLEG): container finished" podID="a1cb5d5d-e469-408e-ba18-e9e77cd67f41" containerID="8b5a35b4b10c54c2c92173790d95c05cc7f0716336801d6589e373824b44afdd" exitCode=0 Jan 28 17:28:39 crc kubenswrapper[4903]: I0128 17:28:39.844093 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-24nfn" event={"ID":"a1cb5d5d-e469-408e-ba18-e9e77cd67f41","Type":"ContainerDied","Data":"8b5a35b4b10c54c2c92173790d95c05cc7f0716336801d6589e373824b44afdd"} Jan 28 17:28:39 crc kubenswrapper[4903]: I0128 17:28:39.882286 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-xd8j6" podStartSLOduration=4.8201280319999995 podStartE2EDuration="6.882227288s" podCreationTimestamp="2026-01-28 17:28:33 +0000 UTC" firstStartedPulling="2026-01-28 17:28:34.691318221 +0000 UTC m=+6186.967289732" lastFinishedPulling="2026-01-28 17:28:36.753417477 +0000 UTC m=+6189.029388988" observedRunningTime="2026-01-28 17:28:39.86645872 +0000 UTC m=+6192.142430231" watchObservedRunningTime="2026-01-28 17:28:39.882227288 +0000 UTC m=+6192.158198829" Jan 28 17:28:39 crc kubenswrapper[4903]: I0128 17:28:39.954610 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5mdx7" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" 
probeResult="failure" output=< Jan 28 17:28:39 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:28:39 crc kubenswrapper[4903]: > Jan 28 17:28:40 crc kubenswrapper[4903]: I0128 17:28:40.856815 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-24nfn" event={"ID":"a1cb5d5d-e469-408e-ba18-e9e77cd67f41","Type":"ContainerStarted","Data":"3b93f804356ff716a3734aad971d0d73ebe4f96d8ad9dfa3d5282c7fc9b5ec51"} Jan 28 17:28:40 crc kubenswrapper[4903]: I0128 17:28:40.887778 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-24nfn" podStartSLOduration=4.270339504 podStartE2EDuration="6.887756693s" podCreationTimestamp="2026-01-28 17:28:34 +0000 UTC" firstStartedPulling="2026-01-28 17:28:35.480643542 +0000 UTC m=+6187.756615053" lastFinishedPulling="2026-01-28 17:28:38.098060721 +0000 UTC m=+6190.374032242" observedRunningTime="2026-01-28 17:28:40.883765625 +0000 UTC m=+6193.159737136" watchObservedRunningTime="2026-01-28 17:28:40.887756693 +0000 UTC m=+6193.163728204" Jan 28 17:28:41 crc kubenswrapper[4903]: I0128 17:28:41.868710 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-24nfn" Jan 28 17:28:42 crc kubenswrapper[4903]: I0128 17:28:42.048029 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7bf8-account-create-update-x5n7k"] Jan 28 17:28:42 crc kubenswrapper[4903]: I0128 17:28:42.057267 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-fbkf8"] Jan 28 17:28:42 crc kubenswrapper[4903]: I0128 17:28:42.066070 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7bf8-account-create-update-x5n7k"] Jan 28 17:28:42 crc kubenswrapper[4903]: I0128 17:28:42.075625 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-fbkf8"] Jan 28 17:28:42 crc kubenswrapper[4903]: I0128 17:28:42.427657 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64aa9df3-905f-457d-ae9e-2bbff742fe60" path="/var/lib/kubelet/pods/64aa9df3-905f-457d-ae9e-2bbff742fe60/volumes" Jan 28 17:28:42 crc kubenswrapper[4903]: I0128 17:28:42.428364 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2acc5e-acc6-4ea7-8212-927d3e2749fe" path="/var/lib/kubelet/pods/9d2acc5e-acc6-4ea7-8212-927d3e2749fe/volumes" Jan 28 17:28:47 crc kubenswrapper[4903]: I0128 17:28:47.173431 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-j6f6z" Jan 28 17:28:48 crc kubenswrapper[4903]: I0128 17:28:48.957810 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:28:48 crc kubenswrapper[4903]: I0128 17:28:48.969999 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-xd8j6" Jan 28 17:28:49 crc kubenswrapper[4903]: I0128 17:28:49.025497 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:28:49 crc kubenswrapper[4903]: I0128 17:28:49.201951 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5mdx7"] Jan 28 17:28:49 crc kubenswrapper[4903]: I0128 17:28:49.917484 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-24nfn" Jan 28 17:28:50 crc 
kubenswrapper[4903]: I0128 17:28:50.041114 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xhsp5"] Jan 28 17:28:50 crc kubenswrapper[4903]: I0128 17:28:50.053514 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xhsp5"] Jan 28 17:28:50 crc kubenswrapper[4903]: I0128 17:28:50.438063 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13adc33f-5819-4774-82e5-eefd361bd22c" path="/var/lib/kubelet/pods/13adc33f-5819-4774-82e5-eefd361bd22c/volumes" Jan 28 17:28:50 crc kubenswrapper[4903]: I0128 17:28:50.961812 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5mdx7" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" containerID="cri-o://73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92" gracePeriod=2 Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.509635 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.528939 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr2t6\" (UniqueName: \"kubernetes.io/projected/92a654e1-6894-483a-b7bf-f699ce05e2c7-kube-api-access-zr2t6\") pod \"92a654e1-6894-483a-b7bf-f699ce05e2c7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.529084 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-catalog-content\") pod \"92a654e1-6894-483a-b7bf-f699ce05e2c7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.529162 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-utilities\") pod \"92a654e1-6894-483a-b7bf-f699ce05e2c7\" (UID: \"92a654e1-6894-483a-b7bf-f699ce05e2c7\") " Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.530257 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-utilities" (OuterVolumeSpecName: "utilities") pod "92a654e1-6894-483a-b7bf-f699ce05e2c7" (UID: "92a654e1-6894-483a-b7bf-f699ce05e2c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.540710 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a654e1-6894-483a-b7bf-f699ce05e2c7-kube-api-access-zr2t6" (OuterVolumeSpecName: "kube-api-access-zr2t6") pod "92a654e1-6894-483a-b7bf-f699ce05e2c7" (UID: "92a654e1-6894-483a-b7bf-f699ce05e2c7"). InnerVolumeSpecName "kube-api-access-zr2t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.631052 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr2t6\" (UniqueName: \"kubernetes.io/projected/92a654e1-6894-483a-b7bf-f699ce05e2c7-kube-api-access-zr2t6\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.631094 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.660029 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92a654e1-6894-483a-b7bf-f699ce05e2c7" (UID: "92a654e1-6894-483a-b7bf-f699ce05e2c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.732947 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a654e1-6894-483a-b7bf-f699ce05e2c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.974411 4903 generic.go:334] "Generic (PLEG): container finished" podID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerID="73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92" exitCode=0 Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.974479 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5mdx7" event={"ID":"92a654e1-6894-483a-b7bf-f699ce05e2c7","Type":"ContainerDied","Data":"73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92"} Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.974547 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5mdx7" event={"ID":"92a654e1-6894-483a-b7bf-f699ce05e2c7","Type":"ContainerDied","Data":"780750ba2b932de11f0a5b4ce4b08e476fe87a2cfd2cb24b2af7d808dc467602"} Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.974560 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5mdx7" Jan 28 17:28:51 crc kubenswrapper[4903]: I0128 17:28:51.974574 4903 scope.go:117] "RemoveContainer" containerID="73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.006561 4903 scope.go:117] "RemoveContainer" containerID="473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.011674 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5mdx7"] Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.020491 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5mdx7"] Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.032428 4903 scope.go:117] "RemoveContainer" containerID="0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.076766 4903 scope.go:117] "RemoveContainer" containerID="73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92" Jan 28 17:28:52 crc kubenswrapper[4903]: E0128 17:28:52.077752 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92\": container with ID starting with 73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92 not found: ID does not exist" containerID="73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.077814 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92"} err="failed to get container status \"73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92\": rpc error: code = NotFound desc = could not find container \"73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92\": container with ID starting with 73d67d2b9019f998827bc7a62e99c4dd8f8d69f3db0c0152dbaf327624876c92 not found: ID does not exist" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.077843 4903 scope.go:117] "RemoveContainer" containerID="473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4" Jan 28 17:28:52 crc kubenswrapper[4903]: E0128 17:28:52.078576 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4\": container with ID starting with 473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4 not found: ID does not exist" containerID="473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.078657 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4"} err="failed to get container status \"473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4\": rpc error: code = NotFound desc = could not find container \"473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4\": container with ID starting with 473afda33c5647da5200cac2f12d7b6cd4781ac55c6f55520e8cc4c69524c3f4 not found: ID does not exist" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.078823 4903 scope.go:117] "RemoveContainer" 
containerID="0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55" Jan 28 17:28:52 crc kubenswrapper[4903]: E0128 17:28:52.079284 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55\": container with ID starting with 0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55 not found: ID does not exist" containerID="0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.079336 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55"} err="failed to get container status \"0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55\": rpc error: code = NotFound desc = could not find container \"0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55\": container with ID starting with 0226880d3f94397c7cb8d9c1b7bb5fc8f703f08a6d92ade34e0d629ab63f2b55 not found: ID does not exist" Jan 28 17:28:52 crc kubenswrapper[4903]: I0128 17:28:52.423970 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" path="/var/lib/kubelet/pods/92a654e1-6894-483a-b7bf-f699ce05e2c7/volumes" Jan 28 17:28:55 crc kubenswrapper[4903]: I0128 17:28:55.265187 4903 scope.go:117] "RemoveContainer" containerID="53592c7ec4a67ce99e9d73a00d8e95198e278a9324c810f0a38795cac229ed02" Jan 28 17:28:55 crc kubenswrapper[4903]: I0128 17:28:55.300277 4903 scope.go:117] "RemoveContainer" containerID="808bf977ddaa90666b89a675ac9bffc1c6ae565cb10e01b70a556a566c321959" Jan 28 17:28:55 crc kubenswrapper[4903]: I0128 17:28:55.351083 4903 scope.go:117] "RemoveContainer" containerID="81be15fdd8343214a566a7022156eaf6e27036b3320ea1267e253277beb74449" Jan 28 17:28:55 crc kubenswrapper[4903]: I0128 17:28:55.386108 4903 scope.go:117] "RemoveContainer" containerID="d2ded43d112077a9afd63087140897f16e6b8ec3bd607d7c51c7473b317b8f4d" Jan 28 17:28:55 crc kubenswrapper[4903]: I0128 17:28:55.408892 4903 scope.go:117] "RemoveContainer" containerID="56f6fb1e8789284a6648a017b2dd592f659a02667222a2bcf1491cb3fa204da0" Jan 28 17:28:56 crc kubenswrapper[4903]: I0128 17:28:56.613595 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:28:56 crc kubenswrapper[4903]: I0128 17:28:56.613926 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:28:56 crc kubenswrapper[4903]: I0128 17:28:56.613972 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:28:56 crc kubenswrapper[4903]: I0128 17:28:56.614690 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee28cc3262e4fea1138e33197444030f45138047131bb3fe3acbf3798be6fb9a"} 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:28:56 crc kubenswrapper[4903]: I0128 17:28:56.614741 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://ee28cc3262e4fea1138e33197444030f45138047131bb3fe3acbf3798be6fb9a" gracePeriod=600 Jan 28 17:28:57 crc kubenswrapper[4903]: I0128 17:28:57.026094 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="ee28cc3262e4fea1138e33197444030f45138047131bb3fe3acbf3798be6fb9a" exitCode=0 Jan 28 17:28:57 crc kubenswrapper[4903]: I0128 17:28:57.026389 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"ee28cc3262e4fea1138e33197444030f45138047131bb3fe3acbf3798be6fb9a"} Jan 28 17:28:57 crc kubenswrapper[4903]: I0128 17:28:57.026665 4903 scope.go:117] "RemoveContainer" containerID="84b2164f367436f4dc21b954b1cd7c8e5b3746e70617a5035a98d90b640e4fce" Jan 28 17:28:58 crc kubenswrapper[4903]: I0128 17:28:58.037686 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f"} Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.674876 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5485944877-6btzv"] Jan 28 17:29:00 crc kubenswrapper[4903]: E0128 17:29:00.676299 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.676320 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" Jan 28 17:29:00 crc kubenswrapper[4903]: E0128 17:29:00.676332 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="extract-utilities" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.676340 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="extract-utilities" Jan 28 17:29:00 crc kubenswrapper[4903]: E0128 17:29:00.676355 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="extract-content" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.676364 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="extract-content" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.676605 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a654e1-6894-483a-b7bf-f699ce05e2c7" containerName="registry-server" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.678440 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.683130 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-49d25" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.683347 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.683676 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.683820 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.694739 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5485944877-6btzv"] Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.735146 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.735431 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-log" containerID="cri-o://4ec1705d34eec6bd3c07a4c78cde506dc751984da4d0202d23843d6f55983b6f" gracePeriod=30 Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.736731 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-httpd" containerID="cri-o://c1fca57c96424f6bd69443030714c72aa7c068520efa1215ab657585fd64c242" gracePeriod=30 Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.807941 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c86cc98d5-mjwh2"] Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.810014 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.817335 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c86cc98d5-mjwh2"] Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.848670 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-config-data\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.848874 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-scripts\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.848902 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gzm\" (UniqueName: \"kubernetes.io/projected/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-kube-api-access-x5gzm\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.848950 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-logs\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.848984 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-horizon-secret-key\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.860660 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.860989 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-log" containerID="cri-o://328ff5d9baa13164bc2ccaca95ffb90e49a2af512f2d4bf932eaabf8d61010b4" gracePeriod=30 Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.861489 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-httpd" containerID="cri-o://2ccbcd4dbbd7109a4248162bb50d9eeff6410f3e480fce1f9ffffe6ab0c94753" gracePeriod=30 Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.950881 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-horizon-secret-key\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 
17:29:00.950958 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-scripts\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951680 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-scripts\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951731 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-logs\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951760 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gzm\" (UniqueName: \"kubernetes.io/projected/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-kube-api-access-x5gzm\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951788 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-logs\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951826 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-horizon-secret-key\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951877 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-config-data\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951896 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcl4k\" (UniqueName: \"kubernetes.io/projected/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-kube-api-access-bcl4k\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951943 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-config-data\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.951959 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-scripts\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.952627 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-logs\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.953240 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-config-data\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.974766 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-horizon-secret-key\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:00 crc kubenswrapper[4903]: I0128 17:29:00.985149 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gzm\" (UniqueName: \"kubernetes.io/projected/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-kube-api-access-x5gzm\") pod \"horizon-5485944877-6btzv\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.028652 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.053817 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-config-data\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.053862 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcl4k\" (UniqueName: \"kubernetes.io/projected/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-kube-api-access-bcl4k\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.053916 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-scripts\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.053977 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-horizon-secret-key\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.054041 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-logs\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.054856 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-scripts\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.055443 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-config-data\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.056227 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-logs\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.059970 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-horizon-secret-key\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.073430 4903 generic.go:334] "Generic (PLEG): container finished" 
podID="a81328fb-7a19-4672-babf-bde845e899aa" containerID="4ec1705d34eec6bd3c07a4c78cde506dc751984da4d0202d23843d6f55983b6f" exitCode=143 Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.073511 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a81328fb-7a19-4672-babf-bde845e899aa","Type":"ContainerDied","Data":"4ec1705d34eec6bd3c07a4c78cde506dc751984da4d0202d23843d6f55983b6f"} Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.075696 4903 generic.go:334] "Generic (PLEG): container finished" podID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerID="328ff5d9baa13164bc2ccaca95ffb90e49a2af512f2d4bf932eaabf8d61010b4" exitCode=143 Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.075749 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"656574bf-0d38-4b5c-b93d-ea7d83da6ff6","Type":"ContainerDied","Data":"328ff5d9baa13164bc2ccaca95ffb90e49a2af512f2d4bf932eaabf8d61010b4"} Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.078034 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcl4k\" (UniqueName: \"kubernetes.io/projected/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-kube-api-access-bcl4k\") pod \"horizon-6c86cc98d5-mjwh2\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.137137 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.540002 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5485944877-6btzv"] Jan 28 17:29:01 crc kubenswrapper[4903]: I0128 17:29:01.739117 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c86cc98d5-mjwh2"] Jan 28 17:29:02 crc kubenswrapper[4903]: I0128 17:29:02.090472 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5485944877-6btzv" event={"ID":"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d","Type":"ContainerStarted","Data":"7e72b5493c9156916aef1b84daafaaafead32b9aa431479ff6a00fb92e4e24a2"} Jan 28 17:29:02 crc kubenswrapper[4903]: I0128 17:29:02.092685 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c86cc98d5-mjwh2" event={"ID":"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b","Type":"ContainerStarted","Data":"46da952da9359c22f5261fa2525ee3090ed3bf3c7cd7386381a1834dedc2a153"} Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.087165 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5485944877-6btzv"] Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.135206 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5745b988c6-drcnm"] Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.137217 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.148450 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.177332 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5745b988c6-drcnm"] Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.211245 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c86cc98d5-mjwh2"] Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.253989 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-logs\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.254065 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-config-data\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.254182 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-tls-certs\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.254269 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-combined-ca-bundle\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.254297 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c69ms\" (UniqueName: \"kubernetes.io/projected/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-kube-api-access-c69ms\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.254336 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-scripts\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.254386 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-secret-key\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.288237 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-795ddfcdd6-blwfr"] Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.291657 
4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.310238 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-795ddfcdd6-blwfr"] Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.355818 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-combined-ca-bundle\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.355925 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15165ec-e5e2-4795-a054-b0ab4c3956bd-logs\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.355958 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-config-data\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.355984 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-logs\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356006 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-config-data\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356032 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-tls-certs\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356079 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-tls-certs\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356497 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq2f6\" (UniqueName: \"kubernetes.io/projected/c15165ec-e5e2-4795-a054-b0ab4c3956bd-kube-api-access-kq2f6\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356721 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-combined-ca-bundle\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356745 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c69ms\" (UniqueName: \"kubernetes.io/projected/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-kube-api-access-c69ms\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356803 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-scripts\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356831 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-secret-key\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356859 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-scripts\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.356880 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-secret-key\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.357391 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-logs\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.358497 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-config-data\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.359255 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-scripts\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.368246 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-tls-certs\") pod \"horizon-5745b988c6-drcnm\" (UID: 
\"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.376095 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-secret-key\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.394969 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-combined-ca-bundle\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.410230 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c69ms\" (UniqueName: \"kubernetes.io/projected/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-kube-api-access-c69ms\") pod \"horizon-5745b988c6-drcnm\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.462370 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-combined-ca-bundle\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.462457 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15165ec-e5e2-4795-a054-b0ab4c3956bd-logs\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.462510 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-config-data\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.473155 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15165ec-e5e2-4795-a054-b0ab4c3956bd-logs\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.473255 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-tls-certs\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.474246 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-config-data\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.474506 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kq2f6\" (UniqueName: \"kubernetes.io/projected/c15165ec-e5e2-4795-a054-b0ab4c3956bd-kube-api-access-kq2f6\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.478261 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-tls-certs\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.483312 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-scripts\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.483376 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-secret-key\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.484883 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.485793 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-scripts\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.487069 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-secret-key\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.508469 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-combined-ca-bundle\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.534147 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq2f6\" (UniqueName: \"kubernetes.io/projected/c15165ec-e5e2-4795-a054-b0ab4c3956bd-kube-api-access-kq2f6\") pod \"horizon-795ddfcdd6-blwfr\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:03 crc kubenswrapper[4903]: I0128 17:29:03.642069 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.043649 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5745b988c6-drcnm"] Jan 28 17:29:04 crc kubenswrapper[4903]: W0128 17:29:04.117881 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b8f32ff_e7d2_44fe_a1ea_8521fc20c5e2.slice/crio-a3cc39ff2ecada5116ee74e236d147255d98843cd2c605fb55cdeabd1d2cbd28 WatchSource:0}: Error finding container a3cc39ff2ecada5116ee74e236d147255d98843cd2c605fb55cdeabd1d2cbd28: Status 404 returned error can't find the container with id a3cc39ff2ecada5116ee74e236d147255d98843cd2c605fb55cdeabd1d2cbd28 Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.141351 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5745b988c6-drcnm" event={"ID":"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2","Type":"ContainerStarted","Data":"a3cc39ff2ecada5116ee74e236d147255d98843cd2c605fb55cdeabd1d2cbd28"} Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.147639 4903 generic.go:334] "Generic (PLEG): container finished" podID="a81328fb-7a19-4672-babf-bde845e899aa" containerID="c1fca57c96424f6bd69443030714c72aa7c068520efa1215ab657585fd64c242" exitCode=0 Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.147762 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a81328fb-7a19-4672-babf-bde845e899aa","Type":"ContainerDied","Data":"c1fca57c96424f6bd69443030714c72aa7c068520efa1215ab657585fd64c242"} Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.151738 4903 generic.go:334] "Generic (PLEG): container finished" podID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerID="2ccbcd4dbbd7109a4248162bb50d9eeff6410f3e480fce1f9ffffe6ab0c94753" exitCode=0 Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.151818 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"656574bf-0d38-4b5c-b93d-ea7d83da6ff6","Type":"ContainerDied","Data":"2ccbcd4dbbd7109a4248162bb50d9eeff6410f3e480fce1f9ffffe6ab0c94753"} Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.252234 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-795ddfcdd6-blwfr"] Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.695226 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.836295 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-scripts\") pod \"a81328fb-7a19-4672-babf-bde845e899aa\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.836359 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-logs\") pod \"a81328fb-7a19-4672-babf-bde845e899aa\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.836518 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-httpd-run\") pod \"a81328fb-7a19-4672-babf-bde845e899aa\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.836677 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krtn2\" (UniqueName: \"kubernetes.io/projected/a81328fb-7a19-4672-babf-bde845e899aa-kube-api-access-krtn2\") pod \"a81328fb-7a19-4672-babf-bde845e899aa\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.836710 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-public-tls-certs\") pod \"a81328fb-7a19-4672-babf-bde845e899aa\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.836745 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-combined-ca-bundle\") pod \"a81328fb-7a19-4672-babf-bde845e899aa\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.836810 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-config-data\") pod \"a81328fb-7a19-4672-babf-bde845e899aa\" (UID: \"a81328fb-7a19-4672-babf-bde845e899aa\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.843351 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-logs" (OuterVolumeSpecName: "logs") pod "a81328fb-7a19-4672-babf-bde845e899aa" (UID: "a81328fb-7a19-4672-babf-bde845e899aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.843607 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a81328fb-7a19-4672-babf-bde845e899aa" (UID: "a81328fb-7a19-4672-babf-bde845e899aa"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.844681 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81328fb-7a19-4672-babf-bde845e899aa-kube-api-access-krtn2" (OuterVolumeSpecName: "kube-api-access-krtn2") pod "a81328fb-7a19-4672-babf-bde845e899aa" (UID: "a81328fb-7a19-4672-babf-bde845e899aa"). InnerVolumeSpecName "kube-api-access-krtn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.849037 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-scripts" (OuterVolumeSpecName: "scripts") pod "a81328fb-7a19-4672-babf-bde845e899aa" (UID: "a81328fb-7a19-4672-babf-bde845e899aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.879712 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a81328fb-7a19-4672-babf-bde845e899aa" (UID: "a81328fb-7a19-4672-babf-bde845e899aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.885708 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.936650 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a81328fb-7a19-4672-babf-bde845e899aa" (UID: "a81328fb-7a19-4672-babf-bde845e899aa"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.939359 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-config-data\") pod \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.939500 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-internal-tls-certs\") pod \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.939594 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-httpd-run\") pod \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.939628 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsrjd\" (UniqueName: \"kubernetes.io/projected/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-kube-api-access-gsrjd\") pod \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940090 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-combined-ca-bundle\") pod \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940127 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-scripts\") pod \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940164 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-logs\") pod \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\" (UID: \"656574bf-0d38-4b5c-b93d-ea7d83da6ff6\") " Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940779 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krtn2\" (UniqueName: \"kubernetes.io/projected/a81328fb-7a19-4672-babf-bde845e899aa-kube-api-access-krtn2\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940798 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940807 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940817 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940830 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.940838 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a81328fb-7a19-4672-babf-bde845e899aa-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.943119 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-logs" (OuterVolumeSpecName: "logs") pod "656574bf-0d38-4b5c-b93d-ea7d83da6ff6" (UID: "656574bf-0d38-4b5c-b93d-ea7d83da6ff6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.956821 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-scripts" (OuterVolumeSpecName: "scripts") pod "656574bf-0d38-4b5c-b93d-ea7d83da6ff6" (UID: "656574bf-0d38-4b5c-b93d-ea7d83da6ff6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.957196 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "656574bf-0d38-4b5c-b93d-ea7d83da6ff6" (UID: "656574bf-0d38-4b5c-b93d-ea7d83da6ff6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.963768 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-kube-api-access-gsrjd" (OuterVolumeSpecName: "kube-api-access-gsrjd") pod "656574bf-0d38-4b5c-b93d-ea7d83da6ff6" (UID: "656574bf-0d38-4b5c-b93d-ea7d83da6ff6"). InnerVolumeSpecName "kube-api-access-gsrjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.978957 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-config-data" (OuterVolumeSpecName: "config-data") pod "a81328fb-7a19-4672-babf-bde845e899aa" (UID: "a81328fb-7a19-4672-babf-bde845e899aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:04 crc kubenswrapper[4903]: I0128 17:29:04.998597 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "656574bf-0d38-4b5c-b93d-ea7d83da6ff6" (UID: "656574bf-0d38-4b5c-b93d-ea7d83da6ff6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.031752 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-config-data" (OuterVolumeSpecName: "config-data") pod "656574bf-0d38-4b5c-b93d-ea7d83da6ff6" (UID: "656574bf-0d38-4b5c-b93d-ea7d83da6ff6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.042956 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.042994 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.043007 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81328fb-7a19-4672-babf-bde845e899aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.043019 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.043030 4903 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.043047 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsrjd\" (UniqueName: \"kubernetes.io/projected/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-kube-api-access-gsrjd\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.043061 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.043410 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "656574bf-0d38-4b5c-b93d-ea7d83da6ff6" (UID: "656574bf-0d38-4b5c-b93d-ea7d83da6ff6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.151418 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/656574bf-0d38-4b5c-b93d-ea7d83da6ff6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.169174 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a81328fb-7a19-4672-babf-bde845e899aa","Type":"ContainerDied","Data":"6a60f35b023844968a9cd62be112b15e125f6a0c5d10d7cb06a062f6f68bbd89"} Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.169219 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.169298 4903 scope.go:117] "RemoveContainer" containerID="c1fca57c96424f6bd69443030714c72aa7c068520efa1215ab657585fd64c242" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.173626 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795ddfcdd6-blwfr" event={"ID":"c15165ec-e5e2-4795-a054-b0ab4c3956bd","Type":"ContainerStarted","Data":"3b827f44dc921a0195deb4e19013f57e445e87c10e6c295da61e3137fddfee01"} Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.177167 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"656574bf-0d38-4b5c-b93d-ea7d83da6ff6","Type":"ContainerDied","Data":"c4ebb508ccd38fc9e5eebb7830f8429c62e0432c734e411162e953df99b05465"} Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.177232 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.213054 4903 scope.go:117] "RemoveContainer" containerID="4ec1705d34eec6bd3c07a4c78cde506dc751984da4d0202d23843d6f55983b6f" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.215720 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.252957 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.262261 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.284571 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.298303 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: E0128 17:29:05.298974 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-log" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.298992 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-log" Jan 28 17:29:05 crc kubenswrapper[4903]: E0128 17:29:05.299011 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-httpd" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.299019 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-httpd" Jan 28 17:29:05 crc kubenswrapper[4903]: E0128 17:29:05.299044 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-httpd" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.299053 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-httpd" Jan 28 17:29:05 crc kubenswrapper[4903]: E0128 17:29:05.299067 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-log" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.299074 4903 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-log" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.299344 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-log" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.299359 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-log" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.299376 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" containerName="glance-httpd" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.299389 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81328fb-7a19-4672-babf-bde845e899aa" containerName="glance-httpd" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.301390 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.303763 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-256p4" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.303905 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.304263 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.311464 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.313686 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.329202 4903 scope.go:117] "RemoveContainer" containerID="2ccbcd4dbbd7109a4248162bb50d9eeff6410f3e480fce1f9ffffe6ab0c94753" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.332326 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.335044 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.337153 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.340564 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.346358 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.359405 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.360882 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.360950 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ch4\" (UniqueName: \"kubernetes.io/projected/6c974883-4a49-4577-8d0e-4d39968a884e-kube-api-access-59ch4\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.360981 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361000 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361065 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361089 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8def06-83d8-4e66-8714-8ea1f5600a3a-logs\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361107 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361135 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7f45\" (UniqueName: \"kubernetes.io/projected/1f8def06-83d8-4e66-8714-8ea1f5600a3a-kube-api-access-b7f45\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361182 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c974883-4a49-4577-8d0e-4d39968a884e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361205 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361223 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c974883-4a49-4577-8d0e-4d39968a884e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361242 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f8def06-83d8-4e66-8714-8ea1f5600a3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.361262 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.405754 4903 scope.go:117] "RemoveContainer" containerID="328ff5d9baa13164bc2ccaca95ffb90e49a2af512f2d4bf932eaabf8d61010b4" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462655 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8def06-83d8-4e66-8714-8ea1f5600a3a-logs\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462743 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462804 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7f45\" (UniqueName: \"kubernetes.io/projected/1f8def06-83d8-4e66-8714-8ea1f5600a3a-kube-api-access-b7f45\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462882 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c974883-4a49-4577-8d0e-4d39968a884e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462918 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462937 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c974883-4a49-4577-8d0e-4d39968a884e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462955 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f8def06-83d8-4e66-8714-8ea1f5600a3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.462999 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.463055 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.463099 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.463126 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ch4\" (UniqueName: \"kubernetes.io/projected/6c974883-4a49-4577-8d0e-4d39968a884e-kube-api-access-59ch4\") pod \"glance-default-internal-api-0\" (UID: 
\"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.463148 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.463164 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.463236 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.465164 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f8def06-83d8-4e66-8714-8ea1f5600a3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.465817 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8def06-83d8-4e66-8714-8ea1f5600a3a-logs\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.467379 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c974883-4a49-4577-8d0e-4d39968a884e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.469296 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.469627 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c974883-4a49-4577-8d0e-4d39968a884e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.470145 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.471882 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.473447 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.473738 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.474402 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.475000 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c974883-4a49-4577-8d0e-4d39968a884e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.486148 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f8def06-83d8-4e66-8714-8ea1f5600a3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.490357 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7f45\" (UniqueName: \"kubernetes.io/projected/1f8def06-83d8-4e66-8714-8ea1f5600a3a-kube-api-access-b7f45\") pod \"glance-default-external-api-0\" (UID: \"1f8def06-83d8-4e66-8714-8ea1f5600a3a\") " pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.496043 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ch4\" (UniqueName: \"kubernetes.io/projected/6c974883-4a49-4577-8d0e-4d39968a884e-kube-api-access-59ch4\") pod \"glance-default-internal-api-0\" (UID: \"6c974883-4a49-4577-8d0e-4d39968a884e\") " pod="openstack/glance-default-internal-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.630320 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 17:29:05 crc kubenswrapper[4903]: I0128 17:29:05.658460 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:06 crc kubenswrapper[4903]: I0128 17:29:06.399659 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 17:29:06 crc kubenswrapper[4903]: W0128 17:29:06.403989 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8def06_83d8_4e66_8714_8ea1f5600a3a.slice/crio-9c529af4d44bb9218dee75e8b1e9401343c1467c7c41101f0482b81577f4aeca WatchSource:0}: Error finding container 9c529af4d44bb9218dee75e8b1e9401343c1467c7c41101f0482b81577f4aeca: Status 404 returned error can't find the container with id 9c529af4d44bb9218dee75e8b1e9401343c1467c7c41101f0482b81577f4aeca Jan 28 17:29:06 crc kubenswrapper[4903]: I0128 17:29:06.434436 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656574bf-0d38-4b5c-b93d-ea7d83da6ff6" path="/var/lib/kubelet/pods/656574bf-0d38-4b5c-b93d-ea7d83da6ff6/volumes" Jan 28 17:29:06 crc kubenswrapper[4903]: I0128 17:29:06.435228 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81328fb-7a19-4672-babf-bde845e899aa" path="/var/lib/kubelet/pods/a81328fb-7a19-4672-babf-bde845e899aa/volumes" Jan 28 17:29:06 crc kubenswrapper[4903]: I0128 17:29:06.544804 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 17:29:07 crc kubenswrapper[4903]: I0128 17:29:07.228744 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c974883-4a49-4577-8d0e-4d39968a884e","Type":"ContainerStarted","Data":"701f63a1b31f19149670fde8d0431a0bcdbcdb445c7d178c4c5b469731210844"} Jan 28 17:29:07 crc kubenswrapper[4903]: I0128 17:29:07.238448 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f8def06-83d8-4e66-8714-8ea1f5600a3a","Type":"ContainerStarted","Data":"9c529af4d44bb9218dee75e8b1e9401343c1467c7c41101f0482b81577f4aeca"} Jan 28 17:29:08 crc kubenswrapper[4903]: I0128 17:29:08.264796 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c974883-4a49-4577-8d0e-4d39968a884e","Type":"ContainerStarted","Data":"36f449c0f3589d40e4229346b9f0188a0a850bcd92189c741df05ccde9222451"} Jan 28 17:29:08 crc kubenswrapper[4903]: I0128 17:29:08.269130 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f8def06-83d8-4e66-8714-8ea1f5600a3a","Type":"ContainerStarted","Data":"fe42e4814318229ac3f42bfe77b0809429d5aeefdb7a5df21147634c0ecb5a26"} Jan 28 17:29:13 crc kubenswrapper[4903]: I0128 17:29:13.326413 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c86cc98d5-mjwh2" event={"ID":"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b","Type":"ContainerStarted","Data":"7f2061974f52b46ef4ce2b511e1859dcfff7fc311fe78187cf2b26a3d05ed5e7"} Jan 28 17:29:13 crc kubenswrapper[4903]: I0128 17:29:13.329650 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795ddfcdd6-blwfr" event={"ID":"c15165ec-e5e2-4795-a054-b0ab4c3956bd","Type":"ContainerStarted","Data":"9464b679a38844c837cd42a991491d3f7326090371f88a02dcfa71174cdb3d87"} Jan 28 17:29:13 crc kubenswrapper[4903]: I0128 17:29:13.332096 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5485944877-6btzv" 
event={"ID":"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d","Type":"ContainerStarted","Data":"2ab634d330a46b7458876ce429dfa5cb89e766033d799f4e8b29476787298ce7"} Jan 28 17:29:13 crc kubenswrapper[4903]: I0128 17:29:13.333730 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5745b988c6-drcnm" event={"ID":"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2","Type":"ContainerStarted","Data":"70cc5e47a89955d41355980aacf503b1d035e53554014d034826b73311b7034f"} Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.346553 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5745b988c6-drcnm" event={"ID":"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2","Type":"ContainerStarted","Data":"c0b3428f544bc7fe34a4fbcf0168b163ab8146968f3917d846940773b93f133e"} Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.350819 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c86cc98d5-mjwh2" event={"ID":"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b","Type":"ContainerStarted","Data":"b7115a245bfd0a2fe12e637f2404ceca78e4a13513f659a16cafe9084248e5b6"} Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.353394 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c86cc98d5-mjwh2" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon-log" containerID="cri-o://7f2061974f52b46ef4ce2b511e1859dcfff7fc311fe78187cf2b26a3d05ed5e7" gracePeriod=30 Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.353710 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c86cc98d5-mjwh2" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon" containerID="cri-o://b7115a245bfd0a2fe12e637f2404ceca78e4a13513f659a16cafe9084248e5b6" gracePeriod=30 Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.356948 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f8def06-83d8-4e66-8714-8ea1f5600a3a","Type":"ContainerStarted","Data":"1742ab606b4aae7d08c6d191d7ebebff7d0af43ec824ad22652a2f14a8b13522"} Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.359297 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795ddfcdd6-blwfr" event={"ID":"c15165ec-e5e2-4795-a054-b0ab4c3956bd","Type":"ContainerStarted","Data":"91d199ea977810d2e63573cd3daf091df34f6e04e74c0670e3acde3d4d19088b"} Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.369025 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5485944877-6btzv" event={"ID":"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d","Type":"ContainerStarted","Data":"4807e2017b9de4e6a997b3e1ee816f3353be3196237f31f4703cd4f3065f5cdc"} Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.369044 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5485944877-6btzv" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon-log" containerID="cri-o://2ab634d330a46b7458876ce429dfa5cb89e766033d799f4e8b29476787298ce7" gracePeriod=30 Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.369088 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5485944877-6btzv" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon" containerID="cri-o://4807e2017b9de4e6a997b3e1ee816f3353be3196237f31f4703cd4f3065f5cdc" gracePeriod=30 Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.384477 4903 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/horizon-5745b988c6-drcnm" podStartSLOduration=2.582795217 podStartE2EDuration="11.384456777s" podCreationTimestamp="2026-01-28 17:29:03 +0000 UTC" firstStartedPulling="2026-01-28 17:29:04.121592041 +0000 UTC m=+6216.397563552" lastFinishedPulling="2026-01-28 17:29:12.923253601 +0000 UTC m=+6225.199225112" observedRunningTime="2026-01-28 17:29:14.374684972 +0000 UTC m=+6226.650656503" watchObservedRunningTime="2026-01-28 17:29:14.384456777 +0000 UTC m=+6226.660428278" Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.389276 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c974883-4a49-4577-8d0e-4d39968a884e","Type":"ContainerStarted","Data":"0c0c74caca1fc2a55971012620e927640a5fb843be6766f5a08b5e51d25f3a3c"} Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.401114 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-795ddfcdd6-blwfr" podStartSLOduration=2.8022930109999997 podStartE2EDuration="11.401092848s" podCreationTimestamp="2026-01-28 17:29:03 +0000 UTC" firstStartedPulling="2026-01-28 17:29:04.258217207 +0000 UTC m=+6216.534188718" lastFinishedPulling="2026-01-28 17:29:12.857017044 +0000 UTC m=+6225.132988555" observedRunningTime="2026-01-28 17:29:14.397598213 +0000 UTC m=+6226.673569744" watchObservedRunningTime="2026-01-28 17:29:14.401092848 +0000 UTC m=+6226.677064359" Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.445061 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.44503513 podStartE2EDuration="9.44503513s" podCreationTimestamp="2026-01-28 17:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:29:14.434076773 +0000 UTC m=+6226.710048274" watchObservedRunningTime="2026-01-28 17:29:14.44503513 +0000 UTC m=+6226.721006641" Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.459240 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c86cc98d5-mjwh2" podStartSLOduration=3.280869817 podStartE2EDuration="14.459218925s" podCreationTimestamp="2026-01-28 17:29:00 +0000 UTC" firstStartedPulling="2026-01-28 17:29:01.745067927 +0000 UTC m=+6214.021039438" lastFinishedPulling="2026-01-28 17:29:12.923417035 +0000 UTC m=+6225.199388546" observedRunningTime="2026-01-28 17:29:14.452518473 +0000 UTC m=+6226.728489984" watchObservedRunningTime="2026-01-28 17:29:14.459218925 +0000 UTC m=+6226.735190436" Jan 28 17:29:14 crc kubenswrapper[4903]: I0128 17:29:14.490273 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5485944877-6btzv" podStartSLOduration=3.183717521 podStartE2EDuration="14.490245916s" podCreationTimestamp="2026-01-28 17:29:00 +0000 UTC" firstStartedPulling="2026-01-28 17:29:01.550486879 +0000 UTC m=+6213.826458390" lastFinishedPulling="2026-01-28 17:29:12.857015274 +0000 UTC m=+6225.132986785" observedRunningTime="2026-01-28 17:29:14.486026652 +0000 UTC m=+6226.761998163" watchObservedRunningTime="2026-01-28 17:29:14.490245916 +0000 UTC m=+6226.766217427" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.631333 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.631761 4903 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.658850 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.658900 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.666287 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.674136 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.692611 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.692592641 podStartE2EDuration="10.692592641s" podCreationTimestamp="2026-01-28 17:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:29:14.514744251 +0000 UTC m=+6226.790715782" watchObservedRunningTime="2026-01-28 17:29:15.692592641 +0000 UTC m=+6227.968564152" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.708119 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:15 crc kubenswrapper[4903]: I0128 17:29:15.722920 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:16 crc kubenswrapper[4903]: I0128 17:29:16.408578 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:16 crc kubenswrapper[4903]: I0128 17:29:16.408871 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 17:29:16 crc kubenswrapper[4903]: I0128 17:29:16.408889 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:16 crc kubenswrapper[4903]: I0128 17:29:16.408916 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 17:29:18 crc kubenswrapper[4903]: I0128 17:29:18.392127 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 17:29:18 crc kubenswrapper[4903]: I0128 17:29:18.422994 4903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 17:29:18 crc kubenswrapper[4903]: I0128 17:29:18.466681 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:18 crc kubenswrapper[4903]: I0128 17:29:18.470832 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 17:29:20 crc kubenswrapper[4903]: I0128 17:29:20.433984 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 17:29:21 crc kubenswrapper[4903]: I0128 17:29:21.030055 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:21 crc 
kubenswrapper[4903]: I0128 17:29:21.137913 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:23 crc kubenswrapper[4903]: I0128 17:29:23.486357 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:23 crc kubenswrapper[4903]: I0128 17:29:23.486953 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:23 crc kubenswrapper[4903]: I0128 17:29:23.488980 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5745b988c6-drcnm" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.109:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8443: connect: connection refused" Jan 28 17:29:23 crc kubenswrapper[4903]: I0128 17:29:23.642891 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:23 crc kubenswrapper[4903]: I0128 17:29:23.642978 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:23 crc kubenswrapper[4903]: I0128 17:29:23.646316 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 28 17:29:33 crc kubenswrapper[4903]: I0128 17:29:33.487002 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5745b988c6-drcnm" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.109:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8443: connect: connection refused" Jan 28 17:29:33 crc kubenswrapper[4903]: I0128 17:29:33.643482 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 28 17:29:41 crc kubenswrapper[4903]: I0128 17:29:41.040293 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kptr5"] Jan 28 17:29:41 crc kubenswrapper[4903]: I0128 17:29:41.049810 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b7e1-account-create-update-tqrft"] Jan 28 17:29:41 crc kubenswrapper[4903]: I0128 17:29:41.062718 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kptr5"] Jan 28 17:29:41 crc kubenswrapper[4903]: I0128 17:29:41.071154 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b7e1-account-create-update-tqrft"] Jan 28 17:29:42 crc kubenswrapper[4903]: I0128 17:29:42.427784 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ecb8d37-b0da-4a74-9ddf-ea994c2a8822" path="/var/lib/kubelet/pods/2ecb8d37-b0da-4a74-9ddf-ea994c2a8822/volumes" Jan 28 17:29:42 crc kubenswrapper[4903]: I0128 17:29:42.428837 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ca0ad53-779e-47b8-a2b1-89909a9e4660" 
path="/var/lib/kubelet/pods/4ca0ad53-779e-47b8-a2b1-89909a9e4660/volumes" Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.698930 4903 generic.go:334] "Generic (PLEG): container finished" podID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerID="b7115a245bfd0a2fe12e637f2404ceca78e4a13513f659a16cafe9084248e5b6" exitCode=137 Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.699522 4903 generic.go:334] "Generic (PLEG): container finished" podID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerID="7f2061974f52b46ef4ce2b511e1859dcfff7fc311fe78187cf2b26a3d05ed5e7" exitCode=137 Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.699614 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c86cc98d5-mjwh2" event={"ID":"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b","Type":"ContainerDied","Data":"b7115a245bfd0a2fe12e637f2404ceca78e4a13513f659a16cafe9084248e5b6"} Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.699642 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c86cc98d5-mjwh2" event={"ID":"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b","Type":"ContainerDied","Data":"7f2061974f52b46ef4ce2b511e1859dcfff7fc311fe78187cf2b26a3d05ed5e7"} Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.701430 4903 generic.go:334] "Generic (PLEG): container finished" podID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerID="4807e2017b9de4e6a997b3e1ee816f3353be3196237f31f4703cd4f3065f5cdc" exitCode=137 Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.701465 4903 generic.go:334] "Generic (PLEG): container finished" podID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerID="2ab634d330a46b7458876ce429dfa5cb89e766033d799f4e8b29476787298ce7" exitCode=137 Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.701483 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5485944877-6btzv" event={"ID":"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d","Type":"ContainerDied","Data":"4807e2017b9de4e6a997b3e1ee816f3353be3196237f31f4703cd4f3065f5cdc"} Jan 28 17:29:44 crc kubenswrapper[4903]: I0128 17:29:44.701503 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5485944877-6btzv" event={"ID":"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d","Type":"ContainerDied","Data":"2ab634d330a46b7458876ce429dfa5cb89e766033d799f4e8b29476787298ce7"} Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.077809 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.087477 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.273775 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-config-data\") pod \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.273883 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcl4k\" (UniqueName: \"kubernetes.io/projected/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-kube-api-access-bcl4k\") pod \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274173 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-scripts\") pod \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274226 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-scripts\") pod \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274286 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-logs\") pod \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274326 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-logs\") pod \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274373 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-horizon-secret-key\") pod \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274421 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-horizon-secret-key\") pod \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\" (UID: \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274496 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-config-data\") pod \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\" (UID: \"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.274574 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5gzm\" (UniqueName: \"kubernetes.io/projected/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-kube-api-access-x5gzm\") pod \"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\" (UID: 
\"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d\") " Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.276849 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-logs" (OuterVolumeSpecName: "logs") pod "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" (UID: "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.277562 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-logs" (OuterVolumeSpecName: "logs") pod "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" (UID: "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.282135 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" (UID: "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.282751 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" (UID: "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.283438 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-kube-api-access-bcl4k" (OuterVolumeSpecName: "kube-api-access-bcl4k") pod "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" (UID: "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b"). InnerVolumeSpecName "kube-api-access-bcl4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.283840 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-kube-api-access-x5gzm" (OuterVolumeSpecName: "kube-api-access-x5gzm") pod "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" (UID: "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d"). InnerVolumeSpecName "kube-api-access-x5gzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.304600 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-config-data" (OuterVolumeSpecName: "config-data") pod "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" (UID: "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.306839 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-scripts" (OuterVolumeSpecName: "scripts") pod "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" (UID: "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.307463 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-scripts" (OuterVolumeSpecName: "scripts") pod "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" (UID: "c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.312605 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-config-data" (OuterVolumeSpecName: "config-data") pod "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" (UID: "dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377561 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377628 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377644 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377652 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377665 4903 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377674 4903 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377683 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377691 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5gzm\" (UniqueName: \"kubernetes.io/projected/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-kube-api-access-x5gzm\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377700 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.377709 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcl4k\" (UniqueName: \"kubernetes.io/projected/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b-kube-api-access-bcl4k\") on node \"crc\" DevicePath \"\"" Jan 28 
17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.670560 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.712979 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5485944877-6btzv" event={"ID":"c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d","Type":"ContainerDied","Data":"7e72b5493c9156916aef1b84daafaaafead32b9aa431479ff6a00fb92e4e24a2"} Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.713038 4903 scope.go:117] "RemoveContainer" containerID="4807e2017b9de4e6a997b3e1ee816f3353be3196237f31f4703cd4f3065f5cdc" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.713598 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5485944877-6btzv" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.715629 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c86cc98d5-mjwh2" event={"ID":"dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b","Type":"ContainerDied","Data":"46da952da9359c22f5261fa2525ee3090ed3bf3c7cd7386381a1834dedc2a153"} Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.715696 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c86cc98d5-mjwh2" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.781872 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c86cc98d5-mjwh2"] Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.825894 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c86cc98d5-mjwh2"] Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.845286 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5485944877-6btzv"] Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.875199 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5485944877-6btzv"] Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.904709 4903 scope.go:117] "RemoveContainer" containerID="2ab634d330a46b7458876ce429dfa5cb89e766033d799f4e8b29476787298ce7" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.914397 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:29:45 crc kubenswrapper[4903]: I0128 17:29:45.971944 4903 scope.go:117] "RemoveContainer" containerID="b7115a245bfd0a2fe12e637f2404ceca78e4a13513f659a16cafe9084248e5b6" Jan 28 17:29:46 crc kubenswrapper[4903]: I0128 17:29:46.187239 4903 scope.go:117] "RemoveContainer" containerID="7f2061974f52b46ef4ce2b511e1859dcfff7fc311fe78187cf2b26a3d05ed5e7" Jan 28 17:29:46 crc kubenswrapper[4903]: I0128 17:29:46.430121 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" path="/var/lib/kubelet/pods/c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d/volumes" Jan 28 17:29:46 crc kubenswrapper[4903]: I0128 17:29:46.431244 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" path="/var/lib/kubelet/pods/dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b/volumes" Jan 28 17:29:47 crc kubenswrapper[4903]: I0128 17:29:47.639324 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:29:47 crc kubenswrapper[4903]: I0128 17:29:47.765269 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 
28 17:29:47 crc kubenswrapper[4903]: I0128 17:29:47.841987 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5745b988c6-drcnm"] Jan 28 17:29:47 crc kubenswrapper[4903]: I0128 17:29:47.842223 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5745b988c6-drcnm" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon-log" containerID="cri-o://70cc5e47a89955d41355980aacf503b1d035e53554014d034826b73311b7034f" gracePeriod=30 Jan 28 17:29:47 crc kubenswrapper[4903]: I0128 17:29:47.842313 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5745b988c6-drcnm" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" containerID="cri-o://c0b3428f544bc7fe34a4fbcf0168b163ab8146968f3917d846940773b93f133e" gracePeriod=30 Jan 28 17:29:49 crc kubenswrapper[4903]: I0128 17:29:49.033217 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-n5klz"] Jan 28 17:29:49 crc kubenswrapper[4903]: I0128 17:29:49.044812 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-n5klz"] Jan 28 17:29:50 crc kubenswrapper[4903]: I0128 17:29:50.423881 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e5c64f-8064-4fed-9bec-197f34e62bfb" path="/var/lib/kubelet/pods/61e5c64f-8064-4fed-9bec-197f34e62bfb/volumes" Jan 28 17:29:51 crc kubenswrapper[4903]: I0128 17:29:51.779267 4903 generic.go:334] "Generic (PLEG): container finished" podID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerID="c0b3428f544bc7fe34a4fbcf0168b163ab8146968f3917d846940773b93f133e" exitCode=0 Jan 28 17:29:51 crc kubenswrapper[4903]: I0128 17:29:51.779355 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5745b988c6-drcnm" event={"ID":"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2","Type":"ContainerDied","Data":"c0b3428f544bc7fe34a4fbcf0168b163ab8146968f3917d846940773b93f133e"} Jan 28 17:29:53 crc kubenswrapper[4903]: I0128 17:29:53.487075 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5745b988c6-drcnm" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.109:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8443: connect: connection refused" Jan 28 17:29:55 crc kubenswrapper[4903]: I0128 17:29:55.612874 4903 scope.go:117] "RemoveContainer" containerID="a1c9876ce5d33ec37ab1c02a6eb02594835306e4ccbdcd81e3ac9ba7609297a9" Jan 28 17:29:55 crc kubenswrapper[4903]: I0128 17:29:55.640903 4903 scope.go:117] "RemoveContainer" containerID="081478be03050bcbf27057068a6e1ded2bd5896bf5def0eb768518af7caf7966" Jan 28 17:29:55 crc kubenswrapper[4903]: I0128 17:29:55.693202 4903 scope.go:117] "RemoveContainer" containerID="7e9f6d5affbbec650b8b75ce3a951dbc0b1a14767b5c346fe669313a196732e8" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.150393 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4"] Jan 28 17:30:00 crc kubenswrapper[4903]: E0128 17:30:00.151385 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151399 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon" Jan 28 17:30:00 crc kubenswrapper[4903]: E0128 17:30:00.151420 
4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon-log" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151426 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon-log" Jan 28 17:30:00 crc kubenswrapper[4903]: E0128 17:30:00.151451 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151460 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon" Jan 28 17:30:00 crc kubenswrapper[4903]: E0128 17:30:00.151480 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon-log" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151491 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon-log" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151731 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon-log" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151754 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc51890a-72ba-4dfa-9f40-1ac7b0e5ad2b" containerName="horizon" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151774 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.151796 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ffe52a-737f-4ff8-b4cb-8cdd0ae8271d" containerName="horizon-log" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.152606 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.155761 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.157870 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.161521 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4"] Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.226076 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/287bf0f6-bb05-41a4-88c3-4389e0b19e74-secret-volume\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.226138 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8j97\" (UniqueName: \"kubernetes.io/projected/287bf0f6-bb05-41a4-88c3-4389e0b19e74-kube-api-access-m8j97\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.226382 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/287bf0f6-bb05-41a4-88c3-4389e0b19e74-config-volume\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.327893 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/287bf0f6-bb05-41a4-88c3-4389e0b19e74-secret-volume\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.327948 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8j97\" (UniqueName: \"kubernetes.io/projected/287bf0f6-bb05-41a4-88c3-4389e0b19e74-kube-api-access-m8j97\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.328040 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/287bf0f6-bb05-41a4-88c3-4389e0b19e74-config-volume\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.329391 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/287bf0f6-bb05-41a4-88c3-4389e0b19e74-config-volume\") pod 
\"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.337042 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/287bf0f6-bb05-41a4-88c3-4389e0b19e74-secret-volume\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.353046 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8j97\" (UniqueName: \"kubernetes.io/projected/287bf0f6-bb05-41a4-88c3-4389e0b19e74-kube-api-access-m8j97\") pod \"collect-profiles-29493690-7w2q4\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:00 crc kubenswrapper[4903]: I0128 17:30:00.477065 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:01 crc kubenswrapper[4903]: I0128 17:30:01.026994 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4"] Jan 28 17:30:01 crc kubenswrapper[4903]: I0128 17:30:01.893520 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" event={"ID":"287bf0f6-bb05-41a4-88c3-4389e0b19e74","Type":"ContainerStarted","Data":"3b45e4578babc6f44f127323f66547998ae6abe955b093b94411694d4d0bac07"} Jan 28 17:30:01 crc kubenswrapper[4903]: I0128 17:30:01.893902 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" event={"ID":"287bf0f6-bb05-41a4-88c3-4389e0b19e74","Type":"ContainerStarted","Data":"0f14a7e1a0bf39bd3c729e0ea6525b10230df98b0916390c08aa648bdd5503f0"} Jan 28 17:30:01 crc kubenswrapper[4903]: I0128 17:30:01.914291 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" podStartSLOduration=1.914269985 podStartE2EDuration="1.914269985s" podCreationTimestamp="2026-01-28 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:30:01.910483062 +0000 UTC m=+6274.186454573" watchObservedRunningTime="2026-01-28 17:30:01.914269985 +0000 UTC m=+6274.190241496" Jan 28 17:30:02 crc kubenswrapper[4903]: I0128 17:30:02.904478 4903 generic.go:334] "Generic (PLEG): container finished" podID="287bf0f6-bb05-41a4-88c3-4389e0b19e74" containerID="3b45e4578babc6f44f127323f66547998ae6abe955b093b94411694d4d0bac07" exitCode=0 Jan 28 17:30:02 crc kubenswrapper[4903]: I0128 17:30:02.904580 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" event={"ID":"287bf0f6-bb05-41a4-88c3-4389e0b19e74","Type":"ContainerDied","Data":"3b45e4578babc6f44f127323f66547998ae6abe955b093b94411694d4d0bac07"} Jan 28 17:30:03 crc kubenswrapper[4903]: I0128 17:30:03.487008 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5745b988c6-drcnm" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.1.109:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8443: connect: connection refused" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.274605 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.318367 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/287bf0f6-bb05-41a4-88c3-4389e0b19e74-secret-volume\") pod \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.318552 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8j97\" (UniqueName: \"kubernetes.io/projected/287bf0f6-bb05-41a4-88c3-4389e0b19e74-kube-api-access-m8j97\") pod \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.318740 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/287bf0f6-bb05-41a4-88c3-4389e0b19e74-config-volume\") pod \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\" (UID: \"287bf0f6-bb05-41a4-88c3-4389e0b19e74\") " Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.319255 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/287bf0f6-bb05-41a4-88c3-4389e0b19e74-config-volume" (OuterVolumeSpecName: "config-volume") pod "287bf0f6-bb05-41a4-88c3-4389e0b19e74" (UID: "287bf0f6-bb05-41a4-88c3-4389e0b19e74"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.324860 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/287bf0f6-bb05-41a4-88c3-4389e0b19e74-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "287bf0f6-bb05-41a4-88c3-4389e0b19e74" (UID: "287bf0f6-bb05-41a4-88c3-4389e0b19e74"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.324874 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287bf0f6-bb05-41a4-88c3-4389e0b19e74-kube-api-access-m8j97" (OuterVolumeSpecName: "kube-api-access-m8j97") pod "287bf0f6-bb05-41a4-88c3-4389e0b19e74" (UID: "287bf0f6-bb05-41a4-88c3-4389e0b19e74"). InnerVolumeSpecName "kube-api-access-m8j97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.420868 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8j97\" (UniqueName: \"kubernetes.io/projected/287bf0f6-bb05-41a4-88c3-4389e0b19e74-kube-api-access-m8j97\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.420912 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/287bf0f6-bb05-41a4-88c3-4389e0b19e74-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.420925 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/287bf0f6-bb05-41a4-88c3-4389e0b19e74-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.929597 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" event={"ID":"287bf0f6-bb05-41a4-88c3-4389e0b19e74","Type":"ContainerDied","Data":"0f14a7e1a0bf39bd3c729e0ea6525b10230df98b0916390c08aa648bdd5503f0"} Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.929650 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f14a7e1a0bf39bd3c729e0ea6525b10230df98b0916390c08aa648bdd5503f0" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.929683 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4" Jan 28 17:30:04 crc kubenswrapper[4903]: I0128 17:30:04.997970 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh"] Jan 28 17:30:05 crc kubenswrapper[4903]: I0128 17:30:05.008360 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493645-qntzh"] Jan 28 17:30:06 crc kubenswrapper[4903]: I0128 17:30:06.425388 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0e68f5-3463-42a7-8887-c6735d6cb2dc" path="/var/lib/kubelet/pods/df0e68f5-3463-42a7-8887-c6735d6cb2dc/volumes" Jan 28 17:30:13 crc kubenswrapper[4903]: I0128 17:30:13.486867 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5745b988c6-drcnm" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.109:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8443: connect: connection refused" Jan 28 17:30:13 crc kubenswrapper[4903]: I0128 17:30:13.487628 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.047179 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5745b988c6-drcnm" event={"ID":"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2","Type":"ContainerDied","Data":"70cc5e47a89955d41355980aacf503b1d035e53554014d034826b73311b7034f"} Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.047189 4903 generic.go:334] "Generic (PLEG): container finished" podID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerID="70cc5e47a89955d41355980aacf503b1d035e53554014d034826b73311b7034f" exitCode=137 Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.238452 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.327023 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c69ms\" (UniqueName: \"kubernetes.io/projected/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-kube-api-access-c69ms\") pod \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.327120 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-secret-key\") pod \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.327187 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-config-data\") pod \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.327220 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-combined-ca-bundle\") pod \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.327291 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-logs\") pod \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.327320 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-tls-certs\") pod \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.327394 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-scripts\") pod \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\" (UID: \"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2\") " Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.328313 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-logs" (OuterVolumeSpecName: "logs") pod "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" (UID: "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.333301 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" (UID: "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.333543 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-kube-api-access-c69ms" (OuterVolumeSpecName: "kube-api-access-c69ms") pod "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" (UID: "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2"). InnerVolumeSpecName "kube-api-access-c69ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.355677 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-config-data" (OuterVolumeSpecName: "config-data") pod "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" (UID: "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.358901 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" (UID: "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.370370 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-scripts" (OuterVolumeSpecName: "scripts") pod "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" (UID: "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.387652 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" (UID: "6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.430019 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.430146 4903 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.430225 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.430289 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c69ms\" (UniqueName: \"kubernetes.io/projected/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-kube-api-access-c69ms\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.430344 4903 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.430408 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:18 crc kubenswrapper[4903]: I0128 17:30:18.430467 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:19 crc kubenswrapper[4903]: I0128 17:30:19.059710 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5745b988c6-drcnm" event={"ID":"6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2","Type":"ContainerDied","Data":"a3cc39ff2ecada5116ee74e236d147255d98843cd2c605fb55cdeabd1d2cbd28"} Jan 28 17:30:19 crc kubenswrapper[4903]: I0128 17:30:19.059764 4903 scope.go:117] "RemoveContainer" containerID="c0b3428f544bc7fe34a4fbcf0168b163ab8146968f3917d846940773b93f133e" Jan 28 17:30:19 crc kubenswrapper[4903]: I0128 17:30:19.060695 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5745b988c6-drcnm" Jan 28 17:30:19 crc kubenswrapper[4903]: I0128 17:30:19.093261 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5745b988c6-drcnm"] Jan 28 17:30:19 crc kubenswrapper[4903]: I0128 17:30:19.103140 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5745b988c6-drcnm"] Jan 28 17:30:19 crc kubenswrapper[4903]: I0128 17:30:19.229455 4903 scope.go:117] "RemoveContainer" containerID="70cc5e47a89955d41355980aacf503b1d035e53554014d034826b73311b7034f" Jan 28 17:30:20 crc kubenswrapper[4903]: I0128 17:30:20.427997 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" path="/var/lib/kubelet/pods/6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2/volumes" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.145882 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6986fc9fc8-xw4pb"] Jan 28 17:30:53 crc kubenswrapper[4903]: E0128 17:30:53.146805 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.146818 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" Jan 28 17:30:53 crc kubenswrapper[4903]: E0128 17:30:53.146835 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon-log" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.146842 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon-log" Jan 28 17:30:53 crc kubenswrapper[4903]: E0128 17:30:53.146858 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287bf0f6-bb05-41a4-88c3-4389e0b19e74" containerName="collect-profiles" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.146864 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="287bf0f6-bb05-41a4-88c3-4389e0b19e74" containerName="collect-profiles" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.147058 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon-log" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.147075 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8f32ff-e7d2-44fe-a1ea-8521fc20c5e2" containerName="horizon" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.147087 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="287bf0f6-bb05-41a4-88c3-4389e0b19e74" containerName="collect-profiles" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.148136 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.165140 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6986fc9fc8-xw4pb"] Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.270984 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-combined-ca-bundle\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.271310 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s27vs\" (UniqueName: \"kubernetes.io/projected/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-kube-api-access-s27vs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.271417 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-scripts\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.271677 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-logs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.271915 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-config-data\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.271974 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-horizon-tls-certs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.272241 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-horizon-secret-key\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.375320 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-logs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.375518 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-config-data\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.375679 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-horizon-tls-certs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.375784 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-horizon-secret-key\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.375832 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-combined-ca-bundle\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.375901 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s27vs\" (UniqueName: \"kubernetes.io/projected/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-kube-api-access-s27vs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.375935 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-scripts\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.376941 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-logs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.378269 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-scripts\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.379908 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-config-data\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.395400 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-horizon-secret-key\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " 
pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.395591 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-horizon-tls-certs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.396758 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s27vs\" (UniqueName: \"kubernetes.io/projected/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-kube-api-access-s27vs\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.401893 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8fb49c3-b804-40e7-8866-93e7f4ff9d39-combined-ca-bundle\") pod \"horizon-6986fc9fc8-xw4pb\" (UID: \"b8fb49c3-b804-40e7-8866-93e7f4ff9d39\") " pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.470291 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:30:53 crc kubenswrapper[4903]: I0128 17:30:53.976251 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6986fc9fc8-xw4pb"] Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.384780 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6986fc9fc8-xw4pb" event={"ID":"b8fb49c3-b804-40e7-8866-93e7f4ff9d39","Type":"ContainerStarted","Data":"e6d1fdef1a843c1e4442bdcd926c022352814636b68d70aff7778956591a2d43"} Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.385116 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6986fc9fc8-xw4pb" event={"ID":"b8fb49c3-b804-40e7-8866-93e7f4ff9d39","Type":"ContainerStarted","Data":"ea6d231fbb3fe5de0ded6a71985cdebff0eb368df3530a5f6ebef8e1a8eef242"} Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.385129 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6986fc9fc8-xw4pb" event={"ID":"b8fb49c3-b804-40e7-8866-93e7f4ff9d39","Type":"ContainerStarted","Data":"d04168029b3fb252a7a21f245ec0ad1ea13ff3e5cbf1ebce44ddbaa91e4ba6a4"} Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.407421 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6986fc9fc8-xw4pb" podStartSLOduration=1.407405585 podStartE2EDuration="1.407405585s" podCreationTimestamp="2026-01-28 17:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:30:54.404128368 +0000 UTC m=+6326.680099889" watchObservedRunningTime="2026-01-28 17:30:54.407405585 +0000 UTC m=+6326.683377096" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.667299 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-l2shz"] Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.668845 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-l2shz" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.693225 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-l2shz"] Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.768065 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-85a4-account-create-update-lgq85"] Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.769685 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.774909 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.778678 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-85a4-account-create-update-lgq85"] Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.803278 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24798df-5487-4f50-8a20-8c1890f588ed-operator-scripts\") pod \"heat-db-create-l2shz\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " pod="openstack/heat-db-create-l2shz" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.803341 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7nv\" (UniqueName: \"kubernetes.io/projected/e24798df-5487-4f50-8a20-8c1890f588ed-kube-api-access-sb7nv\") pod \"heat-db-create-l2shz\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " pod="openstack/heat-db-create-l2shz" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.905919 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e160-d43c-4d46-b4b1-77e53a64e845-operator-scripts\") pod \"heat-85a4-account-create-update-lgq85\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.905991 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2j8\" (UniqueName: \"kubernetes.io/projected/a309e160-d43c-4d46-b4b1-77e53a64e845-kube-api-access-4q2j8\") pod \"heat-85a4-account-create-update-lgq85\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.906124 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24798df-5487-4f50-8a20-8c1890f588ed-operator-scripts\") pod \"heat-db-create-l2shz\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " pod="openstack/heat-db-create-l2shz" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.906270 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7nv\" (UniqueName: \"kubernetes.io/projected/e24798df-5487-4f50-8a20-8c1890f588ed-kube-api-access-sb7nv\") pod \"heat-db-create-l2shz\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " pod="openstack/heat-db-create-l2shz" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.907277 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e24798df-5487-4f50-8a20-8c1890f588ed-operator-scripts\") pod \"heat-db-create-l2shz\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " pod="openstack/heat-db-create-l2shz" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.926366 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7nv\" (UniqueName: \"kubernetes.io/projected/e24798df-5487-4f50-8a20-8c1890f588ed-kube-api-access-sb7nv\") pod \"heat-db-create-l2shz\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " pod="openstack/heat-db-create-l2shz" Jan 28 17:30:54 crc kubenswrapper[4903]: I0128 17:30:54.989826 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-l2shz" Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.008405 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e160-d43c-4d46-b4b1-77e53a64e845-operator-scripts\") pod \"heat-85a4-account-create-update-lgq85\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.008462 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q2j8\" (UniqueName: \"kubernetes.io/projected/a309e160-d43c-4d46-b4b1-77e53a64e845-kube-api-access-4q2j8\") pod \"heat-85a4-account-create-update-lgq85\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.009226 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e160-d43c-4d46-b4b1-77e53a64e845-operator-scripts\") pod \"heat-85a4-account-create-update-lgq85\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.030273 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q2j8\" (UniqueName: \"kubernetes.io/projected/a309e160-d43c-4d46-b4b1-77e53a64e845-kube-api-access-4q2j8\") pod \"heat-85a4-account-create-update-lgq85\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.106111 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.525151 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-l2shz"] Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.664059 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-85a4-account-create-update-lgq85"] Jan 28 17:30:55 crc kubenswrapper[4903]: I0128 17:30:55.897511 4903 scope.go:117] "RemoveContainer" containerID="af0492989d99eb71ded5472f0d26466ad0211c811e1c282be74f357d9631af4b" Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.003432 4903 scope.go:117] "RemoveContainer" containerID="d37440abfd282df5799f942eb878d28eb693d3dcd1093461f2759c914c224219" Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.024044 4903 scope.go:117] "RemoveContainer" containerID="186f1feaa0b666f0bdc3dcd1aec4ac6ce780f2c18b8997b67c277cc0102e267a" Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.060474 4903 scope.go:117] "RemoveContainer" containerID="f42cb245ce1f3d6d1ba33b358c06aa4bb03f9ad77faae1e053dece83d035ae4c" Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.090677 4903 scope.go:117] "RemoveContainer" containerID="9f7af22b77c0548d184633858fe755b01b8f7467a86139bb4fc765cfdfc488a6" Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.413331 4903 generic.go:334] "Generic (PLEG): container finished" podID="a309e160-d43c-4d46-b4b1-77e53a64e845" containerID="0df13b8f96907041432d758342af1cc3472c1c47539664c799ea6d0dcef496b5" exitCode=0 Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.420946 4903 generic.go:334] "Generic (PLEG): container finished" podID="e24798df-5487-4f50-8a20-8c1890f588ed" containerID="f373d7d7a531285bca9707fadf60c03dce46b02197924ad42ddec3726e309b5d" exitCode=0 Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.438221 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-85a4-account-create-update-lgq85" event={"ID":"a309e160-d43c-4d46-b4b1-77e53a64e845","Type":"ContainerDied","Data":"0df13b8f96907041432d758342af1cc3472c1c47539664c799ea6d0dcef496b5"} Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.438298 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-85a4-account-create-update-lgq85" event={"ID":"a309e160-d43c-4d46-b4b1-77e53a64e845","Type":"ContainerStarted","Data":"95ca65a5ee07449f7b58381cc0c699445cff427414b0e082bf96d9f9fd5ae572"} Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.438316 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-l2shz" event={"ID":"e24798df-5487-4f50-8a20-8c1890f588ed","Type":"ContainerDied","Data":"f373d7d7a531285bca9707fadf60c03dce46b02197924ad42ddec3726e309b5d"} Jan 28 17:30:56 crc kubenswrapper[4903]: I0128 17:30:56.438334 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-l2shz" event={"ID":"e24798df-5487-4f50-8a20-8c1890f588ed","Type":"ContainerStarted","Data":"af21d792d97019be64f39f089eabde4cd6f746420eef30b54d21290f5ba769f8"} Jan 28 17:30:57 crc kubenswrapper[4903]: I0128 17:30:57.914247 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:57 crc kubenswrapper[4903]: I0128 17:30:57.918202 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-l2shz" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.093963 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e160-d43c-4d46-b4b1-77e53a64e845-operator-scripts\") pod \"a309e160-d43c-4d46-b4b1-77e53a64e845\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.094131 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24798df-5487-4f50-8a20-8c1890f588ed-operator-scripts\") pod \"e24798df-5487-4f50-8a20-8c1890f588ed\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.094232 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7nv\" (UniqueName: \"kubernetes.io/projected/e24798df-5487-4f50-8a20-8c1890f588ed-kube-api-access-sb7nv\") pod \"e24798df-5487-4f50-8a20-8c1890f588ed\" (UID: \"e24798df-5487-4f50-8a20-8c1890f588ed\") " Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.094296 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q2j8\" (UniqueName: \"kubernetes.io/projected/a309e160-d43c-4d46-b4b1-77e53a64e845-kube-api-access-4q2j8\") pod \"a309e160-d43c-4d46-b4b1-77e53a64e845\" (UID: \"a309e160-d43c-4d46-b4b1-77e53a64e845\") " Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.095194 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a309e160-d43c-4d46-b4b1-77e53a64e845-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a309e160-d43c-4d46-b4b1-77e53a64e845" (UID: "a309e160-d43c-4d46-b4b1-77e53a64e845"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.096388 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24798df-5487-4f50-8a20-8c1890f588ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e24798df-5487-4f50-8a20-8c1890f588ed" (UID: "e24798df-5487-4f50-8a20-8c1890f588ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.109886 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24798df-5487-4f50-8a20-8c1890f588ed-kube-api-access-sb7nv" (OuterVolumeSpecName: "kube-api-access-sb7nv") pod "e24798df-5487-4f50-8a20-8c1890f588ed" (UID: "e24798df-5487-4f50-8a20-8c1890f588ed"). InnerVolumeSpecName "kube-api-access-sb7nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.109995 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a309e160-d43c-4d46-b4b1-77e53a64e845-kube-api-access-4q2j8" (OuterVolumeSpecName: "kube-api-access-4q2j8") pod "a309e160-d43c-4d46-b4b1-77e53a64e845" (UID: "a309e160-d43c-4d46-b4b1-77e53a64e845"). InnerVolumeSpecName "kube-api-access-4q2j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.197334 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24798df-5487-4f50-8a20-8c1890f588ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.197377 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7nv\" (UniqueName: \"kubernetes.io/projected/e24798df-5487-4f50-8a20-8c1890f588ed-kube-api-access-sb7nv\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.197387 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q2j8\" (UniqueName: \"kubernetes.io/projected/a309e160-d43c-4d46-b4b1-77e53a64e845-kube-api-access-4q2j8\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.197399 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a309e160-d43c-4d46-b4b1-77e53a64e845-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.441316 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-l2shz" event={"ID":"e24798df-5487-4f50-8a20-8c1890f588ed","Type":"ContainerDied","Data":"af21d792d97019be64f39f089eabde4cd6f746420eef30b54d21290f5ba769f8"} Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.441358 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af21d792d97019be64f39f089eabde4cd6f746420eef30b54d21290f5ba769f8" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.441355 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-l2shz" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.443436 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-85a4-account-create-update-lgq85" event={"ID":"a309e160-d43c-4d46-b4b1-77e53a64e845","Type":"ContainerDied","Data":"95ca65a5ee07449f7b58381cc0c699445cff427414b0e082bf96d9f9fd5ae572"} Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.443505 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95ca65a5ee07449f7b58381cc0c699445cff427414b0e082bf96d9f9fd5ae572" Jan 28 17:30:58 crc kubenswrapper[4903]: I0128 17:30:58.443472 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-85a4-account-create-update-lgq85" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.843472 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-ndrvj"] Jan 28 17:30:59 crc kubenswrapper[4903]: E0128 17:30:59.844188 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a309e160-d43c-4d46-b4b1-77e53a64e845" containerName="mariadb-account-create-update" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.844200 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a309e160-d43c-4d46-b4b1-77e53a64e845" containerName="mariadb-account-create-update" Jan 28 17:30:59 crc kubenswrapper[4903]: E0128 17:30:59.844209 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24798df-5487-4f50-8a20-8c1890f588ed" containerName="mariadb-database-create" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.844215 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24798df-5487-4f50-8a20-8c1890f588ed" containerName="mariadb-database-create" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.844438 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a309e160-d43c-4d46-b4b1-77e53a64e845" containerName="mariadb-account-create-update" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.844461 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24798df-5487-4f50-8a20-8c1890f588ed" containerName="mariadb-database-create" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.845102 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ndrvj" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.854491 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.854678 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ndrvj"] Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.854786 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hkzjx" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.935088 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvzr\" (UniqueName: \"kubernetes.io/projected/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-kube-api-access-klvzr\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.935151 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-combined-ca-bundle\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:30:59 crc kubenswrapper[4903]: I0128 17:30:59.935349 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-config-data\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.037318 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-config-data\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.037601 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klvzr\" (UniqueName: \"kubernetes.io/projected/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-kube-api-access-klvzr\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.037741 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-combined-ca-bundle\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.043581 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-combined-ca-bundle\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.043653 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-config-data\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.059210 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klvzr\" (UniqueName: \"kubernetes.io/projected/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-kube-api-access-klvzr\") pod \"heat-db-sync-ndrvj\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.181147 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:00 crc kubenswrapper[4903]: I0128 17:31:00.733265 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ndrvj"] Jan 28 17:31:01 crc kubenswrapper[4903]: I0128 17:31:01.517106 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ndrvj" event={"ID":"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2","Type":"ContainerStarted","Data":"8646dd5b3e5cced0e2ae310114b0b3c0439b062b2472742a690e936a3b79459c"} Jan 28 17:31:03 crc kubenswrapper[4903]: I0128 17:31:03.471195 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:31:03 crc kubenswrapper[4903]: I0128 17:31:03.471754 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:31:07 crc kubenswrapper[4903]: I0128 17:31:07.062066 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w9wvx"] Jan 28 17:31:07 crc kubenswrapper[4903]: I0128 17:31:07.079174 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d358-account-create-update-jlj5p"] Jan 28 17:31:07 crc kubenswrapper[4903]: I0128 17:31:07.089046 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w9wvx"] Jan 28 17:31:07 crc kubenswrapper[4903]: I0128 17:31:07.104099 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d358-account-create-update-jlj5p"] Jan 28 17:31:08 crc kubenswrapper[4903]: I0128 17:31:08.423309 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e37ca5-27ba-423f-86c5-854a2119285c" path="/var/lib/kubelet/pods/44e37ca5-27ba-423f-86c5-854a2119285c/volumes" Jan 28 17:31:08 crc kubenswrapper[4903]: I0128 17:31:08.425307 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e053efc4-84f0-4d97-a334-180738eb2791" path="/var/lib/kubelet/pods/e053efc4-84f0-4d97-a334-180738eb2791/volumes" Jan 28 17:31:09 crc kubenswrapper[4903]: I0128 17:31:09.602058 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ndrvj" event={"ID":"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2","Type":"ContainerStarted","Data":"9cd945a6a2689175914f96b46101de957c69f45dfbd7651bc8eb024e0bc09b47"} Jan 28 17:31:09 crc kubenswrapper[4903]: I0128 17:31:09.624918 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-ndrvj" podStartSLOduration=2.106027175 podStartE2EDuration="10.624898161s" podCreationTimestamp="2026-01-28 17:30:59 +0000 UTC" firstStartedPulling="2026-01-28 17:31:00.747096709 +0000 UTC m=+6333.023068220" lastFinishedPulling="2026-01-28 17:31:09.265967695 +0000 UTC m=+6341.541939206" observedRunningTime="2026-01-28 17:31:09.619346243 +0000 UTC m=+6341.895317754" watchObservedRunningTime="2026-01-28 17:31:09.624898161 +0000 UTC m=+6341.900869672" Jan 28 17:31:12 crc kubenswrapper[4903]: I0128 17:31:12.637224 4903 generic.go:334] "Generic (PLEG): container finished" podID="77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" containerID="9cd945a6a2689175914f96b46101de957c69f45dfbd7651bc8eb024e0bc09b47" exitCode=0 Jan 28 17:31:12 crc kubenswrapper[4903]: I0128 17:31:12.637339 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ndrvj" event={"ID":"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2","Type":"ContainerDied","Data":"9cd945a6a2689175914f96b46101de957c69f45dfbd7651bc8eb024e0bc09b47"} Jan 28 17:31:13 crc 
kubenswrapper[4903]: I0128 17:31:13.472789 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6986fc9fc8-xw4pb" podUID="b8fb49c3-b804-40e7-8866-93e7f4ff9d39" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.114:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.114:8443: connect: connection refused" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.047127 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-n7jdx"] Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.063751 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n7jdx"] Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.106952 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.210897 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-config-data\") pod \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.210972 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-combined-ca-bundle\") pod \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.211057 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klvzr\" (UniqueName: \"kubernetes.io/projected/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-kube-api-access-klvzr\") pod \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\" (UID: \"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2\") " Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.217044 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-kube-api-access-klvzr" (OuterVolumeSpecName: "kube-api-access-klvzr") pod "77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" (UID: "77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2"). InnerVolumeSpecName "kube-api-access-klvzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.252108 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" (UID: "77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.309877 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-config-data" (OuterVolumeSpecName: "config-data") pod "77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" (UID: "77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.314429 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.314464 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.314498 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klvzr\" (UniqueName: \"kubernetes.io/projected/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2-kube-api-access-klvzr\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.429767 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad604886-c21a-4d1f-bf2b-d1a9765ae9db" path="/var/lib/kubelet/pods/ad604886-c21a-4d1f-bf2b-d1a9765ae9db/volumes" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.673436 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ndrvj" event={"ID":"77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2","Type":"ContainerDied","Data":"8646dd5b3e5cced0e2ae310114b0b3c0439b062b2472742a690e936a3b79459c"} Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.673479 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8646dd5b3e5cced0e2ae310114b0b3c0439b062b2472742a690e936a3b79459c" Jan 28 17:31:14 crc kubenswrapper[4903]: I0128 17:31:14.673611 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ndrvj" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.764518 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-979fbf544-pwp5h"] Jan 28 17:31:15 crc kubenswrapper[4903]: E0128 17:31:15.765371 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" containerName="heat-db-sync" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.765390 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" containerName="heat-db-sync" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.765684 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" containerName="heat-db-sync" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.769311 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.773162 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.774435 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.775038 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hkzjx" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.778023 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-979fbf544-pwp5h"] Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.951673 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-combined-ca-bundle\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.951807 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.951873 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data-custom\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.951943 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t94vt\" (UniqueName: \"kubernetes.io/projected/8855b993-cee5-4a99-b881-0b8f8c04863a-kube-api-access-t94vt\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.983007 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6887c74856-vzjnd"] Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.984911 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:15 crc kubenswrapper[4903]: I0128 17:31:15.998015 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.008511 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7544b59b75-jjtkw"] Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.011821 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.016885 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.041790 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7544b59b75-jjtkw"] Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.057709 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data-custom\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.057853 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t94vt\" (UniqueName: \"kubernetes.io/projected/8855b993-cee5-4a99-b881-0b8f8c04863a-kube-api-access-t94vt\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.057933 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-combined-ca-bundle\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.057954 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data-custom\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.058008 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nff4r\" (UniqueName: \"kubernetes.io/projected/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-kube-api-access-nff4r\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.058144 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.058172 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.058211 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-combined-ca-bundle\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " 
pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.068078 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data-custom\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.068094 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-combined-ca-bundle\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.071473 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6887c74856-vzjnd"] Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.071811 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.089687 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t94vt\" (UniqueName: \"kubernetes.io/projected/8855b993-cee5-4a99-b881-0b8f8c04863a-kube-api-access-t94vt\") pod \"heat-engine-979fbf544-pwp5h\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.102426 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.163854 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.164216 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-combined-ca-bundle\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.164250 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data-custom\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.164301 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.164380 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-combined-ca-bundle\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.164475 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l729c\" (UniqueName: \"kubernetes.io/projected/9db04f6f-b617-4ba4-8efc-64743910ba2a-kube-api-access-l729c\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.164510 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data-custom\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.164582 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nff4r\" (UniqueName: \"kubernetes.io/projected/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-kube-api-access-nff4r\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.174833 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data\") pod \"heat-api-6887c74856-vzjnd\" (UID: 
\"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.180835 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data-custom\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.194497 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-combined-ca-bundle\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.197282 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nff4r\" (UniqueName: \"kubernetes.io/projected/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-kube-api-access-nff4r\") pod \"heat-api-6887c74856-vzjnd\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.265965 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.266075 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-combined-ca-bundle\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.266139 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l729c\" (UniqueName: \"kubernetes.io/projected/9db04f6f-b617-4ba4-8efc-64743910ba2a-kube-api-access-l729c\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.266236 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data-custom\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.270958 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.275173 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data-custom\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " 
pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.281683 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l729c\" (UniqueName: \"kubernetes.io/projected/9db04f6f-b617-4ba4-8efc-64743910ba2a-kube-api-access-l729c\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.287447 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-combined-ca-bundle\") pod \"heat-cfnapi-7544b59b75-jjtkw\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.320105 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.380349 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:16 crc kubenswrapper[4903]: I0128 17:31:16.916795 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-979fbf544-pwp5h"] Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.060269 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6887c74856-vzjnd"] Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.169796 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7544b59b75-jjtkw"] Jan 28 17:31:17 crc kubenswrapper[4903]: W0128 17:31:17.176923 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9db04f6f_b617_4ba4_8efc_64743910ba2a.slice/crio-477a1553443f0d758700c943f93bd1d0f56aa9bba6fd592ca8e6778590bc9b4f WatchSource:0}: Error finding container 477a1553443f0d758700c943f93bd1d0f56aa9bba6fd592ca8e6778590bc9b4f: Status 404 returned error can't find the container with id 477a1553443f0d758700c943f93bd1d0f56aa9bba6fd592ca8e6778590bc9b4f Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.716360 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" event={"ID":"9db04f6f-b617-4ba4-8efc-64743910ba2a","Type":"ContainerStarted","Data":"477a1553443f0d758700c943f93bd1d0f56aa9bba6fd592ca8e6778590bc9b4f"} Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.717853 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6887c74856-vzjnd" event={"ID":"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e","Type":"ContainerStarted","Data":"a678eaf58a056b209fa6e9b800db805fc6534154a72ddfa8ffc440b80e36d7c2"} Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.723447 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-979fbf544-pwp5h" event={"ID":"8855b993-cee5-4a99-b881-0b8f8c04863a","Type":"ContainerStarted","Data":"6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691"} Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.723508 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-979fbf544-pwp5h" event={"ID":"8855b993-cee5-4a99-b881-0b8f8c04863a","Type":"ContainerStarted","Data":"0530bd6db7e457776772c2efa8c26b8e01e8e442453f492095c516d523fa7109"} Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.723736 4903 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:17 crc kubenswrapper[4903]: I0128 17:31:17.746796 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-979fbf544-pwp5h" podStartSLOduration=2.746775615 podStartE2EDuration="2.746775615s" podCreationTimestamp="2026-01-28 17:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:31:17.740349873 +0000 UTC m=+6350.016321384" watchObservedRunningTime="2026-01-28 17:31:17.746775615 +0000 UTC m=+6350.022747126" Jan 28 17:31:20 crc kubenswrapper[4903]: I0128 17:31:20.826831 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" event={"ID":"9db04f6f-b617-4ba4-8efc-64743910ba2a","Type":"ContainerStarted","Data":"45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d"} Jan 28 17:31:20 crc kubenswrapper[4903]: I0128 17:31:20.829244 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:20 crc kubenswrapper[4903]: I0128 17:31:20.841013 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6887c74856-vzjnd" event={"ID":"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e","Type":"ContainerStarted","Data":"46862f7215c1a9b48c310930b526f2fdd66d093553da5af38eb7ac0bf13da949"} Jan 28 17:31:20 crc kubenswrapper[4903]: I0128 17:31:20.842125 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:20 crc kubenswrapper[4903]: I0128 17:31:20.858912 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" podStartSLOduration=3.197755016 podStartE2EDuration="5.858888546s" podCreationTimestamp="2026-01-28 17:31:15 +0000 UTC" firstStartedPulling="2026-01-28 17:31:17.180565037 +0000 UTC m=+6349.456536548" lastFinishedPulling="2026-01-28 17:31:19.841698567 +0000 UTC m=+6352.117670078" observedRunningTime="2026-01-28 17:31:20.854027706 +0000 UTC m=+6353.129999227" watchObservedRunningTime="2026-01-28 17:31:20.858888546 +0000 UTC m=+6353.134860057" Jan 28 17:31:20 crc kubenswrapper[4903]: I0128 17:31:20.897351 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6887c74856-vzjnd" podStartSLOduration=3.135087334 podStartE2EDuration="5.897332842s" podCreationTimestamp="2026-01-28 17:31:15 +0000 UTC" firstStartedPulling="2026-01-28 17:31:17.065679592 +0000 UTC m=+6349.341651103" lastFinishedPulling="2026-01-28 17:31:19.8279251 +0000 UTC m=+6352.103896611" observedRunningTime="2026-01-28 17:31:20.872978282 +0000 UTC m=+6353.148949793" watchObservedRunningTime="2026-01-28 17:31:20.897332842 +0000 UTC m=+6353.173304353" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.776168 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-dcd69c9cc-m72v4"] Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.778697 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.821596 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-54778bc456-8clgb"] Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.823197 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.849618 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54778bc456-8clgb"] Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.865441 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dcd69c9cc-m72v4"] Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.886194 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-79ffcb9bd-xdqvw"] Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.887498 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.892983 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6hxr\" (UniqueName: \"kubernetes.io/projected/f0171ff6-005a-49a8-95ac-20bae49ba638-kube-api-access-z6hxr\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.893061 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-config-data-custom\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.893101 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-combined-ca-bundle\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.896107 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-config-data\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:23 crc kubenswrapper[4903]: I0128 17:31:23.964095 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79ffcb9bd-xdqvw"] Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019357 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrzqc\" (UniqueName: \"kubernetes.io/projected/269698b4-d594-4999-8db0-f29938cb9356-kube-api-access-wrzqc\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019434 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data-custom\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019513 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6hxr\" (UniqueName: 
\"kubernetes.io/projected/f0171ff6-005a-49a8-95ac-20bae49ba638-kube-api-access-z6hxr\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019665 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-config-data-custom\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019743 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-combined-ca-bundle\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019778 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-combined-ca-bundle\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019833 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.019877 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.020064 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-config-data\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.020117 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjsl4\" (UniqueName: \"kubernetes.io/projected/c8c6dd3b-ec36-4327-94f1-252a820fe38d-kube-api-access-gjsl4\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.020230 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-combined-ca-bundle\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.020333 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data-custom\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.052309 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-config-data\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.053142 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-combined-ca-bundle\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.076390 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0171ff6-005a-49a8-95ac-20bae49ba638-config-data-custom\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.108633 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6hxr\" (UniqueName: \"kubernetes.io/projected/f0171ff6-005a-49a8-95ac-20bae49ba638-kube-api-access-z6hxr\") pod \"heat-engine-dcd69c9cc-m72v4\" (UID: \"f0171ff6-005a-49a8-95ac-20bae49ba638\") " pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.112843 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.168905 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-combined-ca-bundle\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.168965 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.168992 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.169060 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjsl4\" (UniqueName: \"kubernetes.io/projected/c8c6dd3b-ec36-4327-94f1-252a820fe38d-kube-api-access-gjsl4\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.169141 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-combined-ca-bundle\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.169178 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data-custom\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.169258 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrzqc\" (UniqueName: \"kubernetes.io/projected/269698b4-d594-4999-8db0-f29938cb9356-kube-api-access-wrzqc\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.169291 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data-custom\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.186866 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data-custom\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " 
pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.190945 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.194603 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-combined-ca-bundle\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.197935 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.200258 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjsl4\" (UniqueName: \"kubernetes.io/projected/c8c6dd3b-ec36-4327-94f1-252a820fe38d-kube-api-access-gjsl4\") pod \"heat-cfnapi-54778bc456-8clgb\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.212003 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrzqc\" (UniqueName: \"kubernetes.io/projected/269698b4-d594-4999-8db0-f29938cb9356-kube-api-access-wrzqc\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.213871 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data-custom\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.215255 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-combined-ca-bundle\") pod \"heat-api-79ffcb9bd-xdqvw\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.269223 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.296151 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:24 crc kubenswrapper[4903]: I0128 17:31:24.948288 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-dcd69c9cc-m72v4"] Jan 28 17:31:24 crc kubenswrapper[4903]: W0128 17:31:24.950487 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0171ff6_005a_49a8_95ac_20bae49ba638.slice/crio-e80fb12909b5bc16af4798f0c88128523559b4cdaf5638a7dab539f80eadacc2 WatchSource:0}: Error finding container e80fb12909b5bc16af4798f0c88128523559b4cdaf5638a7dab539f80eadacc2: Status 404 returned error can't find the container with id e80fb12909b5bc16af4798f0c88128523559b4cdaf5638a7dab539f80eadacc2 Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.019217 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dcd69c9cc-m72v4" event={"ID":"f0171ff6-005a-49a8-95ac-20bae49ba638","Type":"ContainerStarted","Data":"e80fb12909b5bc16af4798f0c88128523559b4cdaf5638a7dab539f80eadacc2"} Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.074395 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-54778bc456-8clgb"] Jan 28 17:31:25 crc kubenswrapper[4903]: W0128 17:31:25.077836 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8c6dd3b_ec36_4327_94f1_252a820fe38d.slice/crio-44f4802938196e2bbd6d509314c2de8077f8713c0b6c07a7405ff0aa8d0163ad WatchSource:0}: Error finding container 44f4802938196e2bbd6d509314c2de8077f8713c0b6c07a7405ff0aa8d0163ad: Status 404 returned error can't find the container with id 44f4802938196e2bbd6d509314c2de8077f8713c0b6c07a7405ff0aa8d0163ad Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.195826 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-79ffcb9bd-xdqvw"] Jan 28 17:31:25 crc kubenswrapper[4903]: W0128 17:31:25.204468 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod269698b4_d594_4999_8db0_f29938cb9356.slice/crio-0511403a1541dbc98d680907777c908c06f3da57c4443b92c380bebbfdf8e5dc WatchSource:0}: Error finding container 0511403a1541dbc98d680907777c908c06f3da57c4443b92c380bebbfdf8e5dc: Status 404 returned error can't find the container with id 0511403a1541dbc98d680907777c908c06f3da57c4443b92c380bebbfdf8e5dc Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.616826 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7544b59b75-jjtkw"] Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.617090 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerName="heat-cfnapi" containerID="cri-o://45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d" gracePeriod=60 Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.656902 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.120:8000/healthcheck\": EOF" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.661809 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6496f95c59-bgw2h"] Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 
17:31:25.664279 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.670951 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.672300 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.680245 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6887c74856-vzjnd"] Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.680598 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6887c74856-vzjnd" podUID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" containerName="heat-api" containerID="cri-o://46862f7215c1a9b48c310930b526f2fdd66d093553da5af38eb7ac0bf13da949" gracePeriod=60 Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.698482 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6496f95c59-bgw2h"] Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.714210 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-config-data\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.714265 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-internal-tls-certs\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.714373 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-public-tls-certs\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.714690 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-config-data-custom\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.714737 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfgv\" (UniqueName: \"kubernetes.io/projected/314e609d-1613-4bd4-9f99-1646217ae196-kube-api-access-wgfgv\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.714859 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-combined-ca-bundle\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: 
\"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.762655 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5b64cbf4cb-phdrn"] Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.764080 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.766128 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.777499 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.806524 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b64cbf4cb-phdrn"] Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816448 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-config-data-custom\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816513 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-config-data-custom\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816568 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgfgv\" (UniqueName: \"kubernetes.io/projected/314e609d-1613-4bd4-9f99-1646217ae196-kube-api-access-wgfgv\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816613 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-config-data\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816645 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6f5j\" (UniqueName: \"kubernetes.io/projected/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-kube-api-access-s6f5j\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816696 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-combined-ca-bundle\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816734 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-internal-tls-certs\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816771 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-internal-tls-certs\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816796 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-config-data\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816846 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-public-tls-certs\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816898 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-public-tls-certs\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.816934 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-combined-ca-bundle\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.823881 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-combined-ca-bundle\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.824635 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-config-data-custom\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.826087 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-public-tls-certs\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.827104 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-config-data\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.827769 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314e609d-1613-4bd4-9f99-1646217ae196-internal-tls-certs\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.855171 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgfgv\" (UniqueName: \"kubernetes.io/projected/314e609d-1613-4bd4-9f99-1646217ae196-kube-api-access-wgfgv\") pod \"heat-cfnapi-6496f95c59-bgw2h\" (UID: \"314e609d-1613-4bd4-9f99-1646217ae196\") " pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.919373 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-config-data-custom\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.919481 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-config-data\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.919519 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6f5j\" (UniqueName: \"kubernetes.io/projected/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-kube-api-access-s6f5j\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.920380 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-internal-tls-certs\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.920510 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-public-tls-certs\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.920635 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-combined-ca-bundle\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.923756 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-config-data-custom\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.926278 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-config-data\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.926714 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-internal-tls-certs\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.927383 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-public-tls-certs\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.928035 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-combined-ca-bundle\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.949265 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6f5j\" (UniqueName: \"kubernetes.io/projected/faf224d0-11f6-4cda-955d-b0dd9aa30bd7-kube-api-access-s6f5j\") pod \"heat-api-5b64cbf4cb-phdrn\" (UID: \"faf224d0-11f6-4cda-955d-b0dd9aa30bd7\") " pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:25 crc kubenswrapper[4903]: I0128 17:31:25.986228 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.040552 4903 generic.go:334] "Generic (PLEG): container finished" podID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" containerID="3750f21ca8622b6ba7b5e33c59b6b17d5e9e2f00d3c5d7feeb5f1f6b0fb374e9" exitCode=1 Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.040648 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54778bc456-8clgb" event={"ID":"c8c6dd3b-ec36-4327-94f1-252a820fe38d","Type":"ContainerDied","Data":"3750f21ca8622b6ba7b5e33c59b6b17d5e9e2f00d3c5d7feeb5f1f6b0fb374e9"} Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.040674 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54778bc456-8clgb" event={"ID":"c8c6dd3b-ec36-4327-94f1-252a820fe38d","Type":"ContainerStarted","Data":"44f4802938196e2bbd6d509314c2de8077f8713c0b6c07a7405ff0aa8d0163ad"} Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.041272 4903 scope.go:117] "RemoveContainer" containerID="3750f21ca8622b6ba7b5e33c59b6b17d5e9e2f00d3c5d7feeb5f1f6b0fb374e9" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.049409 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-dcd69c9cc-m72v4" event={"ID":"f0171ff6-005a-49a8-95ac-20bae49ba638","Type":"ContainerStarted","Data":"2d31fa77c588b63e753e2e73c2cb19ec9b2e23677e7e5b3535c9d817c4da2739"} Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.050312 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.073012 4903 generic.go:334] "Generic (PLEG): container finished" podID="269698b4-d594-4999-8db0-f29938cb9356" containerID="563766c8eb932bca0b32c30efe22387e14c2356ef6b26a1844647a8578bcb518" exitCode=1 Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.073067 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79ffcb9bd-xdqvw" event={"ID":"269698b4-d594-4999-8db0-f29938cb9356","Type":"ContainerDied","Data":"563766c8eb932bca0b32c30efe22387e14c2356ef6b26a1844647a8578bcb518"} Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.073101 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79ffcb9bd-xdqvw" event={"ID":"269698b4-d594-4999-8db0-f29938cb9356","Type":"ContainerStarted","Data":"0511403a1541dbc98d680907777c908c06f3da57c4443b92c380bebbfdf8e5dc"} Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.073854 4903 scope.go:117] "RemoveContainer" containerID="563766c8eb932bca0b32c30efe22387e14c2356ef6b26a1844647a8578bcb518" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.092021 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.102743 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-dcd69c9cc-m72v4" podStartSLOduration=3.102724783 podStartE2EDuration="3.102724783s" podCreationTimestamp="2026-01-28 17:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:31:26.097944755 +0000 UTC m=+6358.373916266" watchObservedRunningTime="2026-01-28 17:31:26.102724783 +0000 UTC m=+6358.378696294" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.614164 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.614588 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.695333 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6496f95c59-bgw2h"] Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.901906 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:31:26 crc kubenswrapper[4903]: I0128 17:31:26.963819 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b64cbf4cb-phdrn"] Jan 28 17:31:26 crc kubenswrapper[4903]: W0128 17:31:26.992704 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf224d0_11f6_4cda_955d_b0dd9aa30bd7.slice/crio-711d447dbed3e0dc2ca953fb3e843bd17a1612b21192df41e6461966d28e7af6 WatchSource:0}: Error finding container 711d447dbed3e0dc2ca953fb3e843bd17a1612b21192df41e6461966d28e7af6: Status 404 returned error can't find the container with id 711d447dbed3e0dc2ca953fb3e843bd17a1612b21192df41e6461966d28e7af6 Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.094634 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6496f95c59-bgw2h" event={"ID":"314e609d-1613-4bd4-9f99-1646217ae196","Type":"ContainerStarted","Data":"94582f44cba79909b9d7ae51978dc6988f942898a6ad0dda827647c7972f44f5"} Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.099694 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54778bc456-8clgb" event={"ID":"c8c6dd3b-ec36-4327-94f1-252a820fe38d","Type":"ContainerStarted","Data":"83a38cbd4040fd5475848a5c7e59a53caa927252ad45dd6fd49e7de4ae562c39"} Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.099899 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.103810 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b64cbf4cb-phdrn" event={"ID":"faf224d0-11f6-4cda-955d-b0dd9aa30bd7","Type":"ContainerStarted","Data":"711d447dbed3e0dc2ca953fb3e843bd17a1612b21192df41e6461966d28e7af6"} Jan 28 
17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.116891 4903 generic.go:334] "Generic (PLEG): container finished" podID="269698b4-d594-4999-8db0-f29938cb9356" containerID="163d9f47b2d4f34ce9af557c982458b7f7930de164c45bb24bebfb6a7b86f39d" exitCode=1 Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.117975 4903 scope.go:117] "RemoveContainer" containerID="163d9f47b2d4f34ce9af557c982458b7f7930de164c45bb24bebfb6a7b86f39d" Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.118673 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79ffcb9bd-xdqvw" event={"ID":"269698b4-d594-4999-8db0-f29938cb9356","Type":"ContainerDied","Data":"163d9f47b2d4f34ce9af557c982458b7f7930de164c45bb24bebfb6a7b86f39d"} Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.118779 4903 scope.go:117] "RemoveContainer" containerID="563766c8eb932bca0b32c30efe22387e14c2356ef6b26a1844647a8578bcb518" Jan 28 17:31:27 crc kubenswrapper[4903]: E0128 17:31:27.118816 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-79ffcb9bd-xdqvw_openstack(269698b4-d594-4999-8db0-f29938cb9356)\"" pod="openstack/heat-api-79ffcb9bd-xdqvw" podUID="269698b4-d594-4999-8db0-f29938cb9356" Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.142519 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-54778bc456-8clgb" podStartSLOduration=4.142501124 podStartE2EDuration="4.142501124s" podCreationTimestamp="2026-01-28 17:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:31:27.133390011 +0000 UTC m=+6359.409361512" watchObservedRunningTime="2026-01-28 17:31:27.142501124 +0000 UTC m=+6359.418472635" Jan 28 17:31:27 crc kubenswrapper[4903]: I0128 17:31:27.950518 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.157243 4903 generic.go:334] "Generic (PLEG): container finished" podID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" containerID="83a38cbd4040fd5475848a5c7e59a53caa927252ad45dd6fd49e7de4ae562c39" exitCode=1 Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.157617 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54778bc456-8clgb" event={"ID":"c8c6dd3b-ec36-4327-94f1-252a820fe38d","Type":"ContainerDied","Data":"83a38cbd4040fd5475848a5c7e59a53caa927252ad45dd6fd49e7de4ae562c39"} Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.157947 4903 scope.go:117] "RemoveContainer" containerID="3750f21ca8622b6ba7b5e33c59b6b17d5e9e2f00d3c5d7feeb5f1f6b0fb374e9" Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.158115 4903 scope.go:117] "RemoveContainer" containerID="83a38cbd4040fd5475848a5c7e59a53caa927252ad45dd6fd49e7de4ae562c39" Jan 28 17:31:28 crc kubenswrapper[4903]: E0128 17:31:28.158504 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54778bc456-8clgb_openstack(c8c6dd3b-ec36-4327-94f1-252a820fe38d)\"" pod="openstack/heat-cfnapi-54778bc456-8clgb" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.168107 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-5b64cbf4cb-phdrn" event={"ID":"faf224d0-11f6-4cda-955d-b0dd9aa30bd7","Type":"ContainerStarted","Data":"08457543e5c2eb982e7d48603d5085aeec983a5c33d68cad87f28cc8de04d63a"} Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.169264 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.180444 4903 scope.go:117] "RemoveContainer" containerID="163d9f47b2d4f34ce9af557c982458b7f7930de164c45bb24bebfb6a7b86f39d" Jan 28 17:31:28 crc kubenswrapper[4903]: E0128 17:31:28.180709 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-79ffcb9bd-xdqvw_openstack(269698b4-d594-4999-8db0-f29938cb9356)\"" pod="openstack/heat-api-79ffcb9bd-xdqvw" podUID="269698b4-d594-4999-8db0-f29938cb9356" Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.207855 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6496f95c59-bgw2h" event={"ID":"314e609d-1613-4bd4-9f99-1646217ae196","Type":"ContainerStarted","Data":"a411cc329a39985ad3ba0eeefe89217494b8a847395deb57fbc3ec100844d59e"} Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.207927 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.269172 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6496f95c59-bgw2h" podStartSLOduration=3.269151494 podStartE2EDuration="3.269151494s" podCreationTimestamp="2026-01-28 17:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:31:28.240090068 +0000 UTC m=+6360.516061579" watchObservedRunningTime="2026-01-28 17:31:28.269151494 +0000 UTC m=+6360.545123005" Jan 28 17:31:28 crc kubenswrapper[4903]: I0128 17:31:28.272150 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5b64cbf4cb-phdrn" podStartSLOduration=3.272142873 podStartE2EDuration="3.272142873s" podCreationTimestamp="2026-01-28 17:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:31:28.26304058 +0000 UTC m=+6360.539012091" watchObservedRunningTime="2026-01-28 17:31:28.272142873 +0000 UTC m=+6360.548114384" Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.218591 4903 scope.go:117] "RemoveContainer" containerID="83a38cbd4040fd5475848a5c7e59a53caa927252ad45dd6fd49e7de4ae562c39" Jan 28 17:31:29 crc kubenswrapper[4903]: E0128 17:31:29.218826 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54778bc456-8clgb_openstack(c8c6dd3b-ec36-4327-94f1-252a820fe38d)\"" pod="openstack/heat-cfnapi-54778bc456-8clgb" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.270682 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.297201 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.297250 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.298046 4903 scope.go:117] "RemoveContainer" containerID="163d9f47b2d4f34ce9af557c982458b7f7930de164c45bb24bebfb6a7b86f39d" Jan 28 17:31:29 crc kubenswrapper[4903]: E0128 17:31:29.298344 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-79ffcb9bd-xdqvw_openstack(269698b4-d594-4999-8db0-f29938cb9356)\"" pod="openstack/heat-api-79ffcb9bd-xdqvw" podUID="269698b4-d594-4999-8db0-f29938cb9356" Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.556205 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6986fc9fc8-xw4pb" Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.638679 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-795ddfcdd6-blwfr"] Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.638953 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon-log" containerID="cri-o://9464b679a38844c837cd42a991491d3f7326090371f88a02dcfa71174cdb3d87" gracePeriod=30 Jan 28 17:31:29 crc kubenswrapper[4903]: I0128 17:31:29.639497 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" containerID="cri-o://91d199ea977810d2e63573cd3daf091df34f6e04e74c0670e3acde3d4d19088b" gracePeriod=30 Jan 28 17:31:30 crc kubenswrapper[4903]: I0128 17:31:30.226143 4903 scope.go:117] "RemoveContainer" containerID="83a38cbd4040fd5475848a5c7e59a53caa927252ad45dd6fd49e7de4ae562c39" Jan 28 17:31:30 crc kubenswrapper[4903]: E0128 17:31:30.226468 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-54778bc456-8clgb_openstack(c8c6dd3b-ec36-4327-94f1-252a820fe38d)\"" pod="openstack/heat-cfnapi-54778bc456-8clgb" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.130064 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.120:8000/healthcheck\": read tcp 10.217.0.2:51118->10.217.1.120:8000: read: connection reset by peer" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.170898 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6887c74856-vzjnd" podUID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.119:8004/healthcheck\": read tcp 10.217.0.2:48676->10.217.1.119:8004: read: connection reset by peer" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.271775 4903 generic.go:334] "Generic (PLEG): container finished" podID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" containerID="46862f7215c1a9b48c310930b526f2fdd66d093553da5af38eb7ac0bf13da949" exitCode=0 Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.271939 
4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6887c74856-vzjnd" event={"ID":"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e","Type":"ContainerDied","Data":"46862f7215c1a9b48c310930b526f2fdd66d093553da5af38eb7ac0bf13da949"} Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.324131 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6887c74856-vzjnd" podUID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.119:8004/healthcheck\": dial tcp 10.217.1.119:8004: connect: connection refused" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.384745 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.120:8000/healthcheck\": dial tcp 10.217.1.120:8000: connect: connection refused" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.737011 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.855315 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.899926 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data\") pod \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.900150 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nff4r\" (UniqueName: \"kubernetes.io/projected/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-kube-api-access-nff4r\") pod \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.900984 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-combined-ca-bundle\") pod \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.901024 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data-custom\") pod \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\" (UID: \"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e\") " Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.907851 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-kube-api-access-nff4r" (OuterVolumeSpecName: "kube-api-access-nff4r") pod "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" (UID: "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e"). InnerVolumeSpecName "kube-api-access-nff4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.908025 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" (UID: "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.930411 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" (UID: "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:31 crc kubenswrapper[4903]: I0128 17:31:31.955692 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data" (OuterVolumeSpecName: "config-data") pod "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" (UID: "0d09cee0-da1d-4de4-bbe5-19c78b5fd58e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.002938 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data-custom\") pod \"9db04f6f-b617-4ba4-8efc-64743910ba2a\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.003015 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data\") pod \"9db04f6f-b617-4ba4-8efc-64743910ba2a\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.003099 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l729c\" (UniqueName: \"kubernetes.io/projected/9db04f6f-b617-4ba4-8efc-64743910ba2a-kube-api-access-l729c\") pod \"9db04f6f-b617-4ba4-8efc-64743910ba2a\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.003275 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-combined-ca-bundle\") pod \"9db04f6f-b617-4ba4-8efc-64743910ba2a\" (UID: \"9db04f6f-b617-4ba4-8efc-64743910ba2a\") " Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.004607 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nff4r\" (UniqueName: \"kubernetes.io/projected/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-kube-api-access-nff4r\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.004671 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.004686 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.004701 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.010080 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db04f6f-b617-4ba4-8efc-64743910ba2a-kube-api-access-l729c" (OuterVolumeSpecName: "kube-api-access-l729c") pod "9db04f6f-b617-4ba4-8efc-64743910ba2a" (UID: "9db04f6f-b617-4ba4-8efc-64743910ba2a"). InnerVolumeSpecName "kube-api-access-l729c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.012202 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9db04f6f-b617-4ba4-8efc-64743910ba2a" (UID: "9db04f6f-b617-4ba4-8efc-64743910ba2a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.031342 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9db04f6f-b617-4ba4-8efc-64743910ba2a" (UID: "9db04f6f-b617-4ba4-8efc-64743910ba2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.079753 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data" (OuterVolumeSpecName: "config-data") pod "9db04f6f-b617-4ba4-8efc-64743910ba2a" (UID: "9db04f6f-b617-4ba4-8efc-64743910ba2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.106674 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.106710 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.106721 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l729c\" (UniqueName: \"kubernetes.io/projected/9db04f6f-b617-4ba4-8efc-64743910ba2a-kube-api-access-l729c\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.106732 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db04f6f-b617-4ba4-8efc-64743910ba2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.283575 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6887c74856-vzjnd" event={"ID":"0d09cee0-da1d-4de4-bbe5-19c78b5fd58e","Type":"ContainerDied","Data":"a678eaf58a056b209fa6e9b800db805fc6534154a72ddfa8ffc440b80e36d7c2"} Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.283933 4903 scope.go:117] "RemoveContainer" containerID="46862f7215c1a9b48c310930b526f2fdd66d093553da5af38eb7ac0bf13da949" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.283632 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6887c74856-vzjnd" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.285496 4903 generic.go:334] "Generic (PLEG): container finished" podID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerID="45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d" exitCode=0 Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.285551 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" event={"ID":"9db04f6f-b617-4ba4-8efc-64743910ba2a","Type":"ContainerDied","Data":"45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d"} Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.285577 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" event={"ID":"9db04f6f-b617-4ba4-8efc-64743910ba2a","Type":"ContainerDied","Data":"477a1553443f0d758700c943f93bd1d0f56aa9bba6fd592ca8e6778590bc9b4f"} Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.285696 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7544b59b75-jjtkw" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.317231 4903 scope.go:117] "RemoveContainer" containerID="45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.329385 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7544b59b75-jjtkw"] Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.338820 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7544b59b75-jjtkw"] Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.341560 4903 scope.go:117] "RemoveContainer" containerID="45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d" Jan 28 17:31:32 crc kubenswrapper[4903]: E0128 17:31:32.341999 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d\": container with ID starting with 45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d not found: ID does not exist" containerID="45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.342042 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d"} err="failed to get container status \"45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d\": rpc error: code = NotFound desc = could not find container \"45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d\": container with ID starting with 45cbc2d9f2e0de5eecf0956dd505f4b4a724fc2bb23de5de479c76b61269a69d not found: ID does not exist" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.347889 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6887c74856-vzjnd"] Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.356285 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6887c74856-vzjnd"] Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.427096 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" path="/var/lib/kubelet/pods/0d09cee0-da1d-4de4-bbe5-19c78b5fd58e/volumes" Jan 28 17:31:32 crc kubenswrapper[4903]: I0128 17:31:32.427751 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" path="/var/lib/kubelet/pods/9db04f6f-b617-4ba4-8efc-64743910ba2a/volumes" Jan 28 17:31:33 crc kubenswrapper[4903]: I0128 17:31:33.300847 4903 generic.go:334] "Generic (PLEG): container finished" podID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerID="91d199ea977810d2e63573cd3daf091df34f6e04e74c0670e3acde3d4d19088b" exitCode=0 Jan 28 17:31:33 crc kubenswrapper[4903]: I0128 17:31:33.300951 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795ddfcdd6-blwfr" event={"ID":"c15165ec-e5e2-4795-a054-b0ab4c3956bd","Type":"ContainerDied","Data":"91d199ea977810d2e63573cd3daf091df34f6e04e74c0670e3acde3d4d19088b"} Jan 28 17:31:33 crc kubenswrapper[4903]: I0128 17:31:33.643412 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.1.110:8443: connect: connection refused" Jan 28 17:31:36 crc kubenswrapper[4903]: I0128 17:31:36.157464 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:37 crc kubenswrapper[4903]: I0128 17:31:37.586431 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6496f95c59-bgw2h" Jan 28 17:31:37 crc kubenswrapper[4903]: I0128 17:31:37.661742 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5b64cbf4cb-phdrn" Jan 28 17:31:37 crc kubenswrapper[4903]: I0128 17:31:37.663583 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54778bc456-8clgb"] Jan 28 17:31:37 crc kubenswrapper[4903]: I0128 17:31:37.767186 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-79ffcb9bd-xdqvw"] Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.224484 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.244570 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data-custom\") pod \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.244717 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data\") pod \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.244824 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjsl4\" (UniqueName: \"kubernetes.io/projected/c8c6dd3b-ec36-4327-94f1-252a820fe38d-kube-api-access-gjsl4\") pod \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.244963 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-combined-ca-bundle\") pod \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\" (UID: \"c8c6dd3b-ec36-4327-94f1-252a820fe38d\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.256173 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8c6dd3b-ec36-4327-94f1-252a820fe38d" (UID: "c8c6dd3b-ec36-4327-94f1-252a820fe38d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.260008 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c6dd3b-ec36-4327-94f1-252a820fe38d-kube-api-access-gjsl4" (OuterVolumeSpecName: "kube-api-access-gjsl4") pod "c8c6dd3b-ec36-4327-94f1-252a820fe38d" (UID: "c8c6dd3b-ec36-4327-94f1-252a820fe38d"). InnerVolumeSpecName "kube-api-access-gjsl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.286580 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8c6dd3b-ec36-4327-94f1-252a820fe38d" (UID: "c8c6dd3b-ec36-4327-94f1-252a820fe38d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.307399 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data" (OuterVolumeSpecName: "config-data") pod "c8c6dd3b-ec36-4327-94f1-252a820fe38d" (UID: "c8c6dd3b-ec36-4327-94f1-252a820fe38d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.349772 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-54778bc456-8clgb" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.349755 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-54778bc456-8clgb" event={"ID":"c8c6dd3b-ec36-4327-94f1-252a820fe38d","Type":"ContainerDied","Data":"44f4802938196e2bbd6d509314c2de8077f8713c0b6c07a7405ff0aa8d0163ad"} Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.349843 4903 scope.go:117] "RemoveContainer" containerID="83a38cbd4040fd5475848a5c7e59a53caa927252ad45dd6fd49e7de4ae562c39" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.350562 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.350700 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjsl4\" (UniqueName: \"kubernetes.io/projected/c8c6dd3b-ec36-4327-94f1-252a820fe38d-kube-api-access-gjsl4\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.351468 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.351501 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8c6dd3b-ec36-4327-94f1-252a820fe38d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.354175 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-79ffcb9bd-xdqvw" event={"ID":"269698b4-d594-4999-8db0-f29938cb9356","Type":"ContainerDied","Data":"0511403a1541dbc98d680907777c908c06f3da57c4443b92c380bebbfdf8e5dc"} Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.354217 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0511403a1541dbc98d680907777c908c06f3da57c4443b92c380bebbfdf8e5dc" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.411698 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.448851 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-54778bc456-8clgb"] Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.464842 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-54778bc456-8clgb"] Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.570094 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data\") pod \"269698b4-d594-4999-8db0-f29938cb9356\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.570197 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data-custom\") pod \"269698b4-d594-4999-8db0-f29938cb9356\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.570293 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-combined-ca-bundle\") pod \"269698b4-d594-4999-8db0-f29938cb9356\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.570331 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrzqc\" (UniqueName: \"kubernetes.io/projected/269698b4-d594-4999-8db0-f29938cb9356-kube-api-access-wrzqc\") pod \"269698b4-d594-4999-8db0-f29938cb9356\" (UID: \"269698b4-d594-4999-8db0-f29938cb9356\") " Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.573613 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "269698b4-d594-4999-8db0-f29938cb9356" (UID: "269698b4-d594-4999-8db0-f29938cb9356"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.573656 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269698b4-d594-4999-8db0-f29938cb9356-kube-api-access-wrzqc" (OuterVolumeSpecName: "kube-api-access-wrzqc") pod "269698b4-d594-4999-8db0-f29938cb9356" (UID: "269698b4-d594-4999-8db0-f29938cb9356"). InnerVolumeSpecName "kube-api-access-wrzqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.607280 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "269698b4-d594-4999-8db0-f29938cb9356" (UID: "269698b4-d594-4999-8db0-f29938cb9356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.629278 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data" (OuterVolumeSpecName: "config-data") pod "269698b4-d594-4999-8db0-f29938cb9356" (UID: "269698b4-d594-4999-8db0-f29938cb9356"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.673437 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.673493 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.673508 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/269698b4-d594-4999-8db0-f29938cb9356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:38 crc kubenswrapper[4903]: I0128 17:31:38.673520 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrzqc\" (UniqueName: \"kubernetes.io/projected/269698b4-d594-4999-8db0-f29938cb9356-kube-api-access-wrzqc\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:39 crc kubenswrapper[4903]: I0128 17:31:39.363866 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-79ffcb9bd-xdqvw" Jan 28 17:31:39 crc kubenswrapper[4903]: I0128 17:31:39.400893 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-79ffcb9bd-xdqvw"] Jan 28 17:31:39 crc kubenswrapper[4903]: I0128 17:31:39.411656 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-79ffcb9bd-xdqvw"] Jan 28 17:31:40 crc kubenswrapper[4903]: I0128 17:31:40.423649 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269698b4-d594-4999-8db0-f29938cb9356" path="/var/lib/kubelet/pods/269698b4-d594-4999-8db0-f29938cb9356/volumes" Jan 28 17:31:40 crc kubenswrapper[4903]: I0128 17:31:40.424491 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" path="/var/lib/kubelet/pods/c8c6dd3b-ec36-4327-94f1-252a820fe38d/volumes" Jan 28 17:31:43 crc kubenswrapper[4903]: I0128 17:31:43.643055 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 28 17:31:44 crc kubenswrapper[4903]: I0128 17:31:44.145542 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-dcd69c9cc-m72v4" Jan 28 17:31:44 crc kubenswrapper[4903]: I0128 17:31:44.199356 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-979fbf544-pwp5h"] Jan 28 17:31:44 crc kubenswrapper[4903]: I0128 17:31:44.199944 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-979fbf544-pwp5h" podUID="8855b993-cee5-4a99-b881-0b8f8c04863a" containerName="heat-engine" containerID="cri-o://6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" gracePeriod=60 Jan 28 17:31:46 crc kubenswrapper[4903]: E0128 17:31:46.105657 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 17:31:46 crc kubenswrapper[4903]: E0128 17:31:46.108179 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 17:31:46 crc kubenswrapper[4903]: E0128 17:31:46.111734 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 17:31:46 crc kubenswrapper[4903]: E0128 17:31:46.111801 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-979fbf544-pwp5h" podUID="8855b993-cee5-4a99-b881-0b8f8c04863a" containerName="heat-engine" Jan 28 17:31:53 crc kubenswrapper[4903]: I0128 17:31:53.643105 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Jan 28 17:31:53 crc kubenswrapper[4903]: I0128 17:31:53.643541 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:31:56 crc kubenswrapper[4903]: E0128 17:31:56.104456 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691 is running failed: container process not found" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 17:31:56 crc kubenswrapper[4903]: E0128 17:31:56.105508 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691 is running failed: container process not found" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 17:31:56 crc kubenswrapper[4903]: E0128 17:31:56.105871 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691 is running failed: container process not found" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 17:31:56 crc kubenswrapper[4903]: E0128 17:31:56.105898 4903 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691 is running failed: container process not found" probeType="Readiness" 
pod="openstack/heat-engine-979fbf544-pwp5h" podUID="8855b993-cee5-4a99-b881-0b8f8c04863a" containerName="heat-engine" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.214209 4903 scope.go:117] "RemoveContainer" containerID="71fc73ee29264dd8eeca7139026fd9d075d55afcfc01c074dbf1c0e44e8361c5" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.296147 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.314377 4903 scope.go:117] "RemoveContainer" containerID="ef9375532bc364927ef9c1f1d94906a9d569b7344f926b8590eb81b461632e56" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.337257 4903 scope.go:117] "RemoveContainer" containerID="1989441333e5947eb1c1166c9e14c17407bb506be6b65ad354406d981c98b1c7" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.367765 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data-custom\") pod \"8855b993-cee5-4a99-b881-0b8f8c04863a\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.367965 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-combined-ca-bundle\") pod \"8855b993-cee5-4a99-b881-0b8f8c04863a\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.368075 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t94vt\" (UniqueName: \"kubernetes.io/projected/8855b993-cee5-4a99-b881-0b8f8c04863a-kube-api-access-t94vt\") pod \"8855b993-cee5-4a99-b881-0b8f8c04863a\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.368180 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data\") pod \"8855b993-cee5-4a99-b881-0b8f8c04863a\" (UID: \"8855b993-cee5-4a99-b881-0b8f8c04863a\") " Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.375503 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8855b993-cee5-4a99-b881-0b8f8c04863a" (UID: "8855b993-cee5-4a99-b881-0b8f8c04863a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.375904 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8855b993-cee5-4a99-b881-0b8f8c04863a-kube-api-access-t94vt" (OuterVolumeSpecName: "kube-api-access-t94vt") pod "8855b993-cee5-4a99-b881-0b8f8c04863a" (UID: "8855b993-cee5-4a99-b881-0b8f8c04863a"). InnerVolumeSpecName "kube-api-access-t94vt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.377194 4903 scope.go:117] "RemoveContainer" containerID="a41b7015b8eecadd87d1859945ee7f5ac9da3596d91808ae673232a4788df15b" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.402320 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8855b993-cee5-4a99-b881-0b8f8c04863a" (UID: "8855b993-cee5-4a99-b881-0b8f8c04863a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.455249 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data" (OuterVolumeSpecName: "config-data") pod "8855b993-cee5-4a99-b881-0b8f8c04863a" (UID: "8855b993-cee5-4a99-b881-0b8f8c04863a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.475640 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.475685 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t94vt\" (UniqueName: \"kubernetes.io/projected/8855b993-cee5-4a99-b881-0b8f8c04863a-kube-api-access-t94vt\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.475716 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.475727 4903 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8855b993-cee5-4a99-b881-0b8f8c04863a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.518956 4903 scope.go:117] "RemoveContainer" containerID="ec2ecd0d6532610a8091c5475dbe4cf4c0a21185e6ab7ad39ef6960bc446a65b" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.532951 4903 generic.go:334] "Generic (PLEG): container finished" podID="8855b993-cee5-4a99-b881-0b8f8c04863a" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" exitCode=0 Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.532988 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-979fbf544-pwp5h" event={"ID":"8855b993-cee5-4a99-b881-0b8f8c04863a","Type":"ContainerDied","Data":"6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691"} Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.533008 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-979fbf544-pwp5h" event={"ID":"8855b993-cee5-4a99-b881-0b8f8c04863a","Type":"ContainerDied","Data":"0530bd6db7e457776772c2efa8c26b8e01e8e442453f492095c516d523fa7109"} Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.533026 4903 scope.go:117] "RemoveContainer" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.533140 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-979fbf544-pwp5h" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.581267 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-979fbf544-pwp5h"] Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.581922 4903 scope.go:117] "RemoveContainer" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" Jan 28 17:31:56 crc kubenswrapper[4903]: E0128 17:31:56.587483 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691\": container with ID starting with 6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691 not found: ID does not exist" containerID="6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.587627 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691"} err="failed to get container status \"6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691\": rpc error: code = NotFound desc = could not find container \"6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691\": container with ID starting with 6e2d451c8a6b95d1a81d7c9654c2ffc9a291083f4e5b2f815e611699fbb42691 not found: ID does not exist" Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.594740 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-979fbf544-pwp5h"] Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.614028 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:31:56 crc kubenswrapper[4903]: I0128 17:31:56.614127 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:31:58 crc kubenswrapper[4903]: I0128 17:31:58.432353 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8855b993-cee5-4a99-b881-0b8f8c04863a" path="/var/lib/kubelet/pods/8855b993-cee5-4a99-b881-0b8f8c04863a/volumes" Jan 28 17:32:03 crc kubenswrapper[4903]: I0128 17:32:03.606382 4903 generic.go:334] "Generic (PLEG): container finished" podID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerID="9464b679a38844c837cd42a991491d3f7326090371f88a02dcfa71174cdb3d87" exitCode=137 Jan 28 17:32:03 crc kubenswrapper[4903]: I0128 17:32:03.607152 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795ddfcdd6-blwfr" event={"ID":"c15165ec-e5e2-4795-a054-b0ab4c3956bd","Type":"ContainerDied","Data":"9464b679a38844c837cd42a991491d3f7326090371f88a02dcfa71174cdb3d87"} Jan 28 17:32:03 crc kubenswrapper[4903]: I0128 17:32:03.930047 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.046136 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-tls-certs\") pod \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.046270 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15165ec-e5e2-4795-a054-b0ab4c3956bd-logs\") pod \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.046377 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-secret-key\") pod \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.046980 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15165ec-e5e2-4795-a054-b0ab4c3956bd-logs" (OuterVolumeSpecName: "logs") pod "c15165ec-e5e2-4795-a054-b0ab4c3956bd" (UID: "c15165ec-e5e2-4795-a054-b0ab4c3956bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.047178 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-scripts\") pod \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.047224 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-config-data\") pod \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.047294 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-combined-ca-bundle\") pod \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.047363 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq2f6\" (UniqueName: \"kubernetes.io/projected/c15165ec-e5e2-4795-a054-b0ab4c3956bd-kube-api-access-kq2f6\") pod \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\" (UID: \"c15165ec-e5e2-4795-a054-b0ab4c3956bd\") " Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.048046 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15165ec-e5e2-4795-a054-b0ab4c3956bd-logs\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.053283 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c15165ec-e5e2-4795-a054-b0ab4c3956bd" (UID: "c15165ec-e5e2-4795-a054-b0ab4c3956bd"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.058891 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15165ec-e5e2-4795-a054-b0ab4c3956bd-kube-api-access-kq2f6" (OuterVolumeSpecName: "kube-api-access-kq2f6") pod "c15165ec-e5e2-4795-a054-b0ab4c3956bd" (UID: "c15165ec-e5e2-4795-a054-b0ab4c3956bd"). InnerVolumeSpecName "kube-api-access-kq2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.078129 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-config-data" (OuterVolumeSpecName: "config-data") pod "c15165ec-e5e2-4795-a054-b0ab4c3956bd" (UID: "c15165ec-e5e2-4795-a054-b0ab4c3956bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.079743 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-scripts" (OuterVolumeSpecName: "scripts") pod "c15165ec-e5e2-4795-a054-b0ab4c3956bd" (UID: "c15165ec-e5e2-4795-a054-b0ab4c3956bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.083366 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c15165ec-e5e2-4795-a054-b0ab4c3956bd" (UID: "c15165ec-e5e2-4795-a054-b0ab4c3956bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.107310 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c15165ec-e5e2-4795-a054-b0ab4c3956bd" (UID: "c15165ec-e5e2-4795-a054-b0ab4c3956bd"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.149654 4903 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.149723 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.149734 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c15165ec-e5e2-4795-a054-b0ab4c3956bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.149742 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.149754 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq2f6\" (UniqueName: \"kubernetes.io/projected/c15165ec-e5e2-4795-a054-b0ab4c3956bd-kube-api-access-kq2f6\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.149765 4903 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15165ec-e5e2-4795-a054-b0ab4c3956bd-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.619628 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795ddfcdd6-blwfr" event={"ID":"c15165ec-e5e2-4795-a054-b0ab4c3956bd","Type":"ContainerDied","Data":"3b827f44dc921a0195deb4e19013f57e445e87c10e6c295da61e3137fddfee01"} Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.619685 4903 scope.go:117] "RemoveContainer" containerID="91d199ea977810d2e63573cd3daf091df34f6e04e74c0670e3acde3d4d19088b" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.619731 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-795ddfcdd6-blwfr" Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.668864 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-795ddfcdd6-blwfr"] Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.682645 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-795ddfcdd6-blwfr"] Jan 28 17:32:04 crc kubenswrapper[4903]: I0128 17:32:04.814347 4903 scope.go:117] "RemoveContainer" containerID="9464b679a38844c837cd42a991491d3f7326090371f88a02dcfa71174cdb3d87" Jan 28 17:32:06 crc kubenswrapper[4903]: I0128 17:32:06.424595 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" path="/var/lib/kubelet/pods/c15165ec-e5e2-4795-a054-b0ab4c3956bd/volumes" Jan 28 17:32:08 crc kubenswrapper[4903]: I0128 17:32:08.643364 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-795ddfcdd6-blwfr" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 17:32:20 crc kubenswrapper[4903]: I0128 17:32:20.046292 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mbrnw"] Jan 28 17:32:20 crc kubenswrapper[4903]: I0128 17:32:20.054707 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mbrnw"] Jan 28 17:32:20 crc kubenswrapper[4903]: I0128 17:32:20.428912 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b390eb3b-8f83-451c-8979-f640f892f3bd" path="/var/lib/kubelet/pods/b390eb3b-8f83-451c-8979-f640f892f3bd/volumes" Jan 28 17:32:21 crc kubenswrapper[4903]: I0128 17:32:21.042172 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f454-account-create-update-jvd5w"] Jan 28 17:32:21 crc kubenswrapper[4903]: I0128 17:32:21.053351 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f454-account-create-update-jvd5w"] Jan 28 17:32:22 crc kubenswrapper[4903]: I0128 17:32:22.425005 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ab7609-a704-4e48-bf27-52b61fca6c7d" path="/var/lib/kubelet/pods/79ab7609-a704-4e48-bf27-52b61fca6c7d/volumes" Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.614433 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.616163 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.616268 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.617608 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.617694 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" gracePeriod=600 Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.851091 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" exitCode=0 Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.851141 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f"} Jan 28 17:32:26 crc kubenswrapper[4903]: I0128 17:32:26.851180 4903 scope.go:117] "RemoveContainer" containerID="ee28cc3262e4fea1138e33197444030f45138047131bb3fe3acbf3798be6fb9a" Jan 28 17:32:27 crc kubenswrapper[4903]: E0128 17:32:27.248158 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:32:27 crc kubenswrapper[4903]: I0128 17:32:27.863451 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:32:27 crc kubenswrapper[4903]: E0128 17:32:27.864107 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.932290 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s"] Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934053 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8855b993-cee5-4a99-b881-0b8f8c04863a" containerName="heat-engine" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934074 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8855b993-cee5-4a99-b881-0b8f8c04863a" containerName="heat-engine" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934093 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934100 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" 
containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934116 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934123 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934148 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934154 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934202 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269698b4-d594-4999-8db0-f29938cb9356" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934210 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="269698b4-d594-4999-8db0-f29938cb9356" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934229 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934236 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934255 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934261 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934289 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon-log" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934296 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon-log" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934580 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934621 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15165ec-e5e2-4795-a054-b0ab4c3956bd" containerName="horizon-log" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934633 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934689 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="269698b4-d594-4999-8db0-f29938cb9356" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934699 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8855b993-cee5-4a99-b881-0b8f8c04863a" containerName="heat-engine" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934717 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db04f6f-b617-4ba4-8efc-64743910ba2a" containerName="heat-cfnapi" Jan 28 17:32:32 crc 
kubenswrapper[4903]: I0128 17:32:32.934724 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d09cee0-da1d-4de4-bbe5-19c78b5fd58e" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: E0128 17:32:32.934943 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269698b4-d594-4999-8db0-f29938cb9356" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.934952 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="269698b4-d594-4999-8db0-f29938cb9356" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.935245 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="269698b4-d594-4999-8db0-f29938cb9356" containerName="heat-api" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.935274 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c6dd3b-ec36-4327-94f1-252a820fe38d" containerName="heat-cfnapi" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.936980 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.939134 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 17:32:32 crc kubenswrapper[4903]: I0128 17:32:32.941556 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s"] Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.031318 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jkxxx"] Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.038762 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.038903 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfxkr\" (UniqueName: \"kubernetes.io/projected/6f81b7db-a482-4152-beab-be67c6181c00-kube-api-access-wfxkr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.038957 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.047463 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jkxxx"] Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.140462 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfxkr\" (UniqueName: 
\"kubernetes.io/projected/6f81b7db-a482-4152-beab-be67c6181c00-kube-api-access-wfxkr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.140576 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.140658 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.141064 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.141123 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.161350 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfxkr\" (UniqueName: \"kubernetes.io/projected/6f81b7db-a482-4152-beab-be67c6181c00-kube-api-access-wfxkr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.258194 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.756572 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s"] Jan 28 17:32:33 crc kubenswrapper[4903]: I0128 17:32:33.918557 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" event={"ID":"6f81b7db-a482-4152-beab-be67c6181c00","Type":"ContainerStarted","Data":"f1dacc59d79f7eb5ff7ee238d52f450a145e37900a9b09050156d2541f14961f"} Jan 28 17:32:34 crc kubenswrapper[4903]: I0128 17:32:34.425099 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85c2276-594c-411a-a241-d17a6b2efe28" path="/var/lib/kubelet/pods/c85c2276-594c-411a-a241-d17a6b2efe28/volumes" Jan 28 17:32:34 crc kubenswrapper[4903]: I0128 17:32:34.932645 4903 generic.go:334] "Generic (PLEG): container finished" podID="6f81b7db-a482-4152-beab-be67c6181c00" containerID="1f2fd58ddd74ea0629c7122905618523c9f53c82cff5d7e45e7d0a2e43fb78e5" exitCode=0 Jan 28 17:32:34 crc kubenswrapper[4903]: I0128 17:32:34.932708 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" event={"ID":"6f81b7db-a482-4152-beab-be67c6181c00","Type":"ContainerDied","Data":"1f2fd58ddd74ea0629c7122905618523c9f53c82cff5d7e45e7d0a2e43fb78e5"} Jan 28 17:32:34 crc kubenswrapper[4903]: I0128 17:32:34.935820 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:32:37 crc kubenswrapper[4903]: I0128 17:32:37.990677 4903 generic.go:334] "Generic (PLEG): container finished" podID="6f81b7db-a482-4152-beab-be67c6181c00" containerID="ac0525bc95736cba56c7f2bc93fcafc882fc4ef29c90ae50441995c351df5c36" exitCode=0 Jan 28 17:32:37 crc kubenswrapper[4903]: I0128 17:32:37.990760 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" event={"ID":"6f81b7db-a482-4152-beab-be67c6181c00","Type":"ContainerDied","Data":"ac0525bc95736cba56c7f2bc93fcafc882fc4ef29c90ae50441995c351df5c36"} Jan 28 17:32:39 crc kubenswrapper[4903]: I0128 17:32:39.004053 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" event={"ID":"6f81b7db-a482-4152-beab-be67c6181c00","Type":"ContainerStarted","Data":"eebb5a1e404d79ce5dcc29abb6fb8120646a071bec8b4aa6283d21cadad4b3f7"} Jan 28 17:32:39 crc kubenswrapper[4903]: I0128 17:32:39.032757 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" podStartSLOduration=4.730890257 podStartE2EDuration="7.03273795s" podCreationTimestamp="2026-01-28 17:32:32 +0000 UTC" firstStartedPulling="2026-01-28 17:32:34.935492746 +0000 UTC m=+6427.211464247" lastFinishedPulling="2026-01-28 17:32:37.237340429 +0000 UTC m=+6429.513311940" observedRunningTime="2026-01-28 17:32:39.02034619 +0000 UTC m=+6431.296317701" watchObservedRunningTime="2026-01-28 17:32:39.03273795 +0000 UTC m=+6431.308709461" Jan 28 17:32:41 crc kubenswrapper[4903]: I0128 17:32:41.023031 4903 generic.go:334] "Generic (PLEG): container finished" podID="6f81b7db-a482-4152-beab-be67c6181c00" 
containerID="eebb5a1e404d79ce5dcc29abb6fb8120646a071bec8b4aa6283d21cadad4b3f7" exitCode=0 Jan 28 17:32:41 crc kubenswrapper[4903]: I0128 17:32:41.023111 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" event={"ID":"6f81b7db-a482-4152-beab-be67c6181c00","Type":"ContainerDied","Data":"eebb5a1e404d79ce5dcc29abb6fb8120646a071bec8b4aa6283d21cadad4b3f7"} Jan 28 17:32:41 crc kubenswrapper[4903]: I0128 17:32:41.414404 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:32:41 crc kubenswrapper[4903]: E0128 17:32:41.414748 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.385326 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.586872 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfxkr\" (UniqueName: \"kubernetes.io/projected/6f81b7db-a482-4152-beab-be67c6181c00-kube-api-access-wfxkr\") pod \"6f81b7db-a482-4152-beab-be67c6181c00\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.587136 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-bundle\") pod \"6f81b7db-a482-4152-beab-be67c6181c00\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.587182 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-util\") pod \"6f81b7db-a482-4152-beab-be67c6181c00\" (UID: \"6f81b7db-a482-4152-beab-be67c6181c00\") " Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.596434 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f81b7db-a482-4152-beab-be67c6181c00-kube-api-access-wfxkr" (OuterVolumeSpecName: "kube-api-access-wfxkr") pod "6f81b7db-a482-4152-beab-be67c6181c00" (UID: "6f81b7db-a482-4152-beab-be67c6181c00"). InnerVolumeSpecName "kube-api-access-wfxkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.597836 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-util" (OuterVolumeSpecName: "util") pod "6f81b7db-a482-4152-beab-be67c6181c00" (UID: "6f81b7db-a482-4152-beab-be67c6181c00"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.689151 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfxkr\" (UniqueName: \"kubernetes.io/projected/6f81b7db-a482-4152-beab-be67c6181c00-kube-api-access-wfxkr\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.689186 4903 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-util\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.768458 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-bundle" (OuterVolumeSpecName: "bundle") pod "6f81b7db-a482-4152-beab-be67c6181c00" (UID: "6f81b7db-a482-4152-beab-be67c6181c00"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:32:42 crc kubenswrapper[4903]: I0128 17:32:42.791730 4903 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f81b7db-a482-4152-beab-be67c6181c00-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:32:43 crc kubenswrapper[4903]: I0128 17:32:43.041190 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" event={"ID":"6f81b7db-a482-4152-beab-be67c6181c00","Type":"ContainerDied","Data":"f1dacc59d79f7eb5ff7ee238d52f450a145e37900a9b09050156d2541f14961f"} Jan 28 17:32:43 crc kubenswrapper[4903]: I0128 17:32:43.041254 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1dacc59d79f7eb5ff7ee238d52f450a145e37900a9b09050156d2541f14961f" Jan 28 17:32:43 crc kubenswrapper[4903]: I0128 17:32:43.041290 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087vc9s" Jan 28 17:32:56 crc kubenswrapper[4903]: I0128 17:32:56.413421 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:32:56 crc kubenswrapper[4903]: E0128 17:32:56.414181 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:32:56 crc kubenswrapper[4903]: I0128 17:32:56.814267 4903 scope.go:117] "RemoveContainer" containerID="64a34c6a61409b1cb98fc19f36014022c36d221c9cbdec582937b1b90eb2bf5a" Jan 28 17:32:56 crc kubenswrapper[4903]: I0128 17:32:56.963105 4903 scope.go:117] "RemoveContainer" containerID="a820c6f442d684793903714792e1d16b8db9334b1b03f4de0bbfc3ad32602fcf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.159015 4903 scope.go:117] "RemoveContainer" containerID="99fb111aef44fa4b9ba708bbc206771ad43ca4bbdf8f2fdc51c89273f326e1c6" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.294717 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w"] Jan 28 17:32:57 crc kubenswrapper[4903]: E0128 17:32:57.295181 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f81b7db-a482-4152-beab-be67c6181c00" containerName="pull" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.295203 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f81b7db-a482-4152-beab-be67c6181c00" containerName="pull" Jan 28 17:32:57 crc kubenswrapper[4903]: E0128 17:32:57.295215 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f81b7db-a482-4152-beab-be67c6181c00" containerName="util" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.295222 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f81b7db-a482-4152-beab-be67c6181c00" containerName="util" Jan 28 17:32:57 crc kubenswrapper[4903]: E0128 17:32:57.295244 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f81b7db-a482-4152-beab-be67c6181c00" containerName="extract" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.295250 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f81b7db-a482-4152-beab-be67c6181c00" containerName="extract" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.295448 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f81b7db-a482-4152-beab-be67c6181c00" containerName="extract" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.315908 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.320041 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-vc48c" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.320266 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.320370 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.322105 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.435832 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58zzw\" (UniqueName: \"kubernetes.io/projected/2fed8189-dd72-4654-830c-1cf670edd12b-kube-api-access-58zzw\") pod \"obo-prometheus-operator-68bc856cb9-gg64w\" (UID: \"2fed8189-dd72-4654-830c-1cf670edd12b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.485041 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.498253 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.509307 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.510960 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-jm9pt" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.522863 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.531271 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.541047 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58zzw\" (UniqueName: \"kubernetes.io/projected/2fed8189-dd72-4654-830c-1cf670edd12b-kube-api-access-58zzw\") pod \"obo-prometheus-operator-68bc856cb9-gg64w\" (UID: \"2fed8189-dd72-4654-830c-1cf670edd12b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.584800 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.614346 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58zzw\" (UniqueName: \"kubernetes.io/projected/2fed8189-dd72-4654-830c-1cf670edd12b-kube-api-access-58zzw\") pod \"obo-prometheus-operator-68bc856cb9-gg64w\" (UID: \"2fed8189-dd72-4654-830c-1cf670edd12b\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.626258 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.647252 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174e27ea-371f-47dd-9d61-61d0a51d2129-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf\" (UID: \"174e27ea-371f-47dd-9d61-61d0a51d2129\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.647332 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32707f0c-97fa-46b4-8d2a-ccd30d2eab81-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss\" (UID: \"32707f0c-97fa-46b4-8d2a-ccd30d2eab81\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.647382 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32707f0c-97fa-46b4-8d2a-ccd30d2eab81-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss\" (UID: \"32707f0c-97fa-46b4-8d2a-ccd30d2eab81\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.647449 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174e27ea-371f-47dd-9d61-61d0a51d2129-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf\" (UID: \"174e27ea-371f-47dd-9d61-61d0a51d2129\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.690310 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8rl9s"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.692302 4903 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.701476 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8rl9s"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.706404 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.706902 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-nwj5m" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.727218 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.762344 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94gf2\" (UniqueName: \"kubernetes.io/projected/85527e1a-b844-463a-8ccb-5b4a7bcd53eb-kube-api-access-94gf2\") pod \"observability-operator-59bdc8b94-8rl9s\" (UID: \"85527e1a-b844-463a-8ccb-5b4a7bcd53eb\") " pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.762415 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174e27ea-371f-47dd-9d61-61d0a51d2129-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf\" (UID: \"174e27ea-371f-47dd-9d61-61d0a51d2129\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.762459 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32707f0c-97fa-46b4-8d2a-ccd30d2eab81-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss\" (UID: \"32707f0c-97fa-46b4-8d2a-ccd30d2eab81\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.762509 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32707f0c-97fa-46b4-8d2a-ccd30d2eab81-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss\" (UID: \"32707f0c-97fa-46b4-8d2a-ccd30d2eab81\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.762647 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174e27ea-371f-47dd-9d61-61d0a51d2129-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf\" (UID: \"174e27ea-371f-47dd-9d61-61d0a51d2129\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.762727 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/85527e1a-b844-463a-8ccb-5b4a7bcd53eb-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8rl9s\" (UID: \"85527e1a-b844-463a-8ccb-5b4a7bcd53eb\") " 
pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.769270 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32707f0c-97fa-46b4-8d2a-ccd30d2eab81-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss\" (UID: \"32707f0c-97fa-46b4-8d2a-ccd30d2eab81\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.770559 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32707f0c-97fa-46b4-8d2a-ccd30d2eab81-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss\" (UID: \"32707f0c-97fa-46b4-8d2a-ccd30d2eab81\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.770715 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174e27ea-371f-47dd-9d61-61d0a51d2129-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf\" (UID: \"174e27ea-371f-47dd-9d61-61d0a51d2129\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.770925 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174e27ea-371f-47dd-9d61-61d0a51d2129-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf\" (UID: \"174e27ea-371f-47dd-9d61-61d0a51d2129\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.780324 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-g48j5"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.781691 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.786097 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zmlj2" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.792377 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-g48j5"] Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.845266 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.866967 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94gf2\" (UniqueName: \"kubernetes.io/projected/85527e1a-b844-463a-8ccb-5b4a7bcd53eb-kube-api-access-94gf2\") pod \"observability-operator-59bdc8b94-8rl9s\" (UID: \"85527e1a-b844-463a-8ccb-5b4a7bcd53eb\") " pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.867056 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zk6r\" (UniqueName: \"kubernetes.io/projected/b3d2c60a-f859-4b7f-9acc-47d3155b1bef-kube-api-access-7zk6r\") pod \"perses-operator-5bf474d74f-g48j5\" (UID: \"b3d2c60a-f859-4b7f-9acc-47d3155b1bef\") " pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.867119 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3d2c60a-f859-4b7f-9acc-47d3155b1bef-openshift-service-ca\") pod \"perses-operator-5bf474d74f-g48j5\" (UID: \"b3d2c60a-f859-4b7f-9acc-47d3155b1bef\") " pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.867188 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/85527e1a-b844-463a-8ccb-5b4a7bcd53eb-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8rl9s\" (UID: \"85527e1a-b844-463a-8ccb-5b4a7bcd53eb\") " pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.873221 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/85527e1a-b844-463a-8ccb-5b4a7bcd53eb-observability-operator-tls\") pod \"observability-operator-59bdc8b94-8rl9s\" (UID: \"85527e1a-b844-463a-8ccb-5b4a7bcd53eb\") " pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.880147 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.928382 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94gf2\" (UniqueName: \"kubernetes.io/projected/85527e1a-b844-463a-8ccb-5b4a7bcd53eb-kube-api-access-94gf2\") pod \"observability-operator-59bdc8b94-8rl9s\" (UID: \"85527e1a-b844-463a-8ccb-5b4a7bcd53eb\") " pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.980235 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zk6r\" (UniqueName: \"kubernetes.io/projected/b3d2c60a-f859-4b7f-9acc-47d3155b1bef-kube-api-access-7zk6r\") pod \"perses-operator-5bf474d74f-g48j5\" (UID: \"b3d2c60a-f859-4b7f-9acc-47d3155b1bef\") " pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.980324 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3d2c60a-f859-4b7f-9acc-47d3155b1bef-openshift-service-ca\") pod \"perses-operator-5bf474d74f-g48j5\" (UID: \"b3d2c60a-f859-4b7f-9acc-47d3155b1bef\") " pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:57 crc kubenswrapper[4903]: I0128 17:32:57.981316 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3d2c60a-f859-4b7f-9acc-47d3155b1bef-openshift-service-ca\") pod \"perses-operator-5bf474d74f-g48j5\" (UID: \"b3d2c60a-f859-4b7f-9acc-47d3155b1bef\") " pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:58 crc kubenswrapper[4903]: I0128 17:32:58.014158 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zk6r\" (UniqueName: \"kubernetes.io/projected/b3d2c60a-f859-4b7f-9acc-47d3155b1bef-kube-api-access-7zk6r\") pod \"perses-operator-5bf474d74f-g48j5\" (UID: \"b3d2c60a-f859-4b7f-9acc-47d3155b1bef\") " pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:58 crc kubenswrapper[4903]: I0128 17:32:58.045392 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:32:58 crc kubenswrapper[4903]: I0128 17:32:58.167631 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:32:58 crc kubenswrapper[4903]: W0128 17:32:58.482355 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fed8189_dd72_4654_830c_1cf670edd12b.slice/crio-18ffe14ead9e9e06430a9062dd6fcee959b3bede585273ade858bbe6a264edbd WatchSource:0}: Error finding container 18ffe14ead9e9e06430a9062dd6fcee959b3bede585273ade858bbe6a264edbd: Status 404 returned error can't find the container with id 18ffe14ead9e9e06430a9062dd6fcee959b3bede585273ade858bbe6a264edbd Jan 28 17:32:58 crc kubenswrapper[4903]: I0128 17:32:58.484817 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w"] Jan 28 17:32:59 crc kubenswrapper[4903]: I0128 17:32:58.739772 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss"] Jan 28 17:32:59 crc kubenswrapper[4903]: W0128 17:32:58.763777 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32707f0c_97fa_46b4_8d2a_ccd30d2eab81.slice/crio-34ab9150d032bc4e8a72df343ffa81c07b85e7fdd52467e0f2b48398e1f741e8 WatchSource:0}: Error finding container 34ab9150d032bc4e8a72df343ffa81c07b85e7fdd52467e0f2b48398e1f741e8: Status 404 returned error can't find the container with id 34ab9150d032bc4e8a72df343ffa81c07b85e7fdd52467e0f2b48398e1f741e8 Jan 28 17:32:59 crc kubenswrapper[4903]: I0128 17:32:58.775027 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf"] Jan 28 17:32:59 crc kubenswrapper[4903]: I0128 17:32:58.934458 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-8rl9s"] Jan 28 17:32:59 crc kubenswrapper[4903]: W0128 17:32:58.949689 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85527e1a_b844_463a_8ccb_5b4a7bcd53eb.slice/crio-1a3e8f0a2a950d6a43b2a60894c6b608bae8f5d5773d8f3cc5e2a96cf5821199 WatchSource:0}: Error finding container 1a3e8f0a2a950d6a43b2a60894c6b608bae8f5d5773d8f3cc5e2a96cf5821199: Status 404 returned error can't find the container with id 1a3e8f0a2a950d6a43b2a60894c6b608bae8f5d5773d8f3cc5e2a96cf5821199 Jan 28 17:32:59 crc kubenswrapper[4903]: I0128 17:32:59.228713 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" event={"ID":"2fed8189-dd72-4654-830c-1cf670edd12b","Type":"ContainerStarted","Data":"18ffe14ead9e9e06430a9062dd6fcee959b3bede585273ade858bbe6a264edbd"} Jan 28 17:32:59 crc kubenswrapper[4903]: I0128 17:32:59.229869 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" event={"ID":"174e27ea-371f-47dd-9d61-61d0a51d2129","Type":"ContainerStarted","Data":"c5e12ca40d8125901447c90789782d45ecf4104218f6f41910302b1727314e57"} Jan 28 17:32:59 crc kubenswrapper[4903]: I0128 17:32:59.231066 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" event={"ID":"32707f0c-97fa-46b4-8d2a-ccd30d2eab81","Type":"ContainerStarted","Data":"34ab9150d032bc4e8a72df343ffa81c07b85e7fdd52467e0f2b48398e1f741e8"} Jan 28 17:32:59 crc 
kubenswrapper[4903]: I0128 17:32:59.232055 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" event={"ID":"85527e1a-b844-463a-8ccb-5b4a7bcd53eb","Type":"ContainerStarted","Data":"1a3e8f0a2a950d6a43b2a60894c6b608bae8f5d5773d8f3cc5e2a96cf5821199"} Jan 28 17:33:00 crc kubenswrapper[4903]: I0128 17:33:00.248543 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-g48j5"] Jan 28 17:33:01 crc kubenswrapper[4903]: I0128 17:33:01.307240 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-g48j5" event={"ID":"b3d2c60a-f859-4b7f-9acc-47d3155b1bef","Type":"ContainerStarted","Data":"f89ac1170ef469bac836af808f0677a909c83e9ede1954e1f4196e91a40b0fc8"} Jan 28 17:33:03 crc kubenswrapper[4903]: I0128 17:33:03.049674 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-73d1-account-create-update-8rtl2"] Jan 28 17:33:03 crc kubenswrapper[4903]: I0128 17:33:03.062598 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-898hf"] Jan 28 17:33:03 crc kubenswrapper[4903]: I0128 17:33:03.074229 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-73d1-account-create-update-8rtl2"] Jan 28 17:33:03 crc kubenswrapper[4903]: I0128 17:33:03.088181 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-898hf"] Jan 28 17:33:04 crc kubenswrapper[4903]: I0128 17:33:04.432711 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfe1130-fa3c-4b3b-9da7-4e564ae28488" path="/var/lib/kubelet/pods/0dfe1130-fa3c-4b3b-9da7-4e564ae28488/volumes" Jan 28 17:33:04 crc kubenswrapper[4903]: I0128 17:33:04.433611 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54c18d4-b547-431e-9e80-d077a19f9a20" path="/var/lib/kubelet/pods/d54c18d4-b547-431e-9e80-d077a19f9a20/volumes" Jan 28 17:33:07 crc kubenswrapper[4903]: I0128 17:33:07.413975 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:33:07 crc kubenswrapper[4903]: E0128 17:33:07.414887 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:33:10 crc kubenswrapper[4903]: I0128 17:33:10.036674 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-97wf9"] Jan 28 17:33:10 crc kubenswrapper[4903]: I0128 17:33:10.056576 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-97wf9"] Jan 28 17:33:10 crc kubenswrapper[4903]: I0128 17:33:10.433570 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05587f5-cb99-43be-9bdf-4c763735c0da" path="/var/lib/kubelet/pods/b05587f5-cb99-43be-9bdf-4c763735c0da/volumes" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.478022 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" 
event={"ID":"85527e1a-b844-463a-8ccb-5b4a7bcd53eb","Type":"ContainerStarted","Data":"5c8729a5ab4ba414075aa5bba938e8986e58963c3549ac40081187173dccdfd1"} Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.480166 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.481374 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-g48j5" event={"ID":"b3d2c60a-f859-4b7f-9acc-47d3155b1bef","Type":"ContainerStarted","Data":"9061b3243284eb3275e1d8a07b8f33535757cedcfbe22a6a96ed5a8836ca6586"} Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.481716 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.484117 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" event={"ID":"2fed8189-dd72-4654-830c-1cf670edd12b","Type":"ContainerStarted","Data":"61845a6b5fb983475bbeb7a385a8c865b4cc4641cd33480dfa6decc3bde1999d"} Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.486749 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.487886 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" event={"ID":"174e27ea-371f-47dd-9d61-61d0a51d2129","Type":"ContainerStarted","Data":"52dee8117ba2cc65482b193ae2e6006eff47c51cb6315e9a2fcf3e97b9c4b5a7"} Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.489550 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" event={"ID":"32707f0c-97fa-46b4-8d2a-ccd30d2eab81","Type":"ContainerStarted","Data":"49f6214e580e22533d867816f5e227d75e6f4d810940b1e5d91484ca29c7edad"} Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.504700 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-8rl9s" podStartSLOduration=3.226871334 podStartE2EDuration="16.504681179s" podCreationTimestamp="2026-01-28 17:32:57 +0000 UTC" firstStartedPulling="2026-01-28 17:32:58.952468603 +0000 UTC m=+6451.228440114" lastFinishedPulling="2026-01-28 17:33:12.230278458 +0000 UTC m=+6464.506249959" observedRunningTime="2026-01-28 17:33:13.500979881 +0000 UTC m=+6465.776951382" watchObservedRunningTime="2026-01-28 17:33:13.504681179 +0000 UTC m=+6465.780652690" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.557925 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-l6nmf" podStartSLOduration=3.080008325 podStartE2EDuration="16.557906119s" podCreationTimestamp="2026-01-28 17:32:57 +0000 UTC" firstStartedPulling="2026-01-28 17:32:58.800825767 +0000 UTC m=+6451.076797278" lastFinishedPulling="2026-01-28 17:33:12.278723561 +0000 UTC m=+6464.554695072" observedRunningTime="2026-01-28 17:33:13.544590594 +0000 UTC m=+6465.820562105" watchObservedRunningTime="2026-01-28 17:33:13.557906119 +0000 UTC m=+6465.833877630" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.677376 4903 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gg64w" podStartSLOduration=2.861328381 podStartE2EDuration="16.677348866s" podCreationTimestamp="2026-01-28 17:32:57 +0000 UTC" firstStartedPulling="2026-01-28 17:32:58.490756004 +0000 UTC m=+6450.766727515" lastFinishedPulling="2026-01-28 17:33:12.306776489 +0000 UTC m=+6464.582748000" observedRunningTime="2026-01-28 17:33:13.636768824 +0000 UTC m=+6465.912740345" watchObservedRunningTime="2026-01-28 17:33:13.677348866 +0000 UTC m=+6465.953320377" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.684696 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69f5b495bb-xrsss" podStartSLOduration=3.22371205 podStartE2EDuration="16.684671911s" podCreationTimestamp="2026-01-28 17:32:57 +0000 UTC" firstStartedPulling="2026-01-28 17:32:58.771768632 +0000 UTC m=+6451.047740143" lastFinishedPulling="2026-01-28 17:33:12.232728493 +0000 UTC m=+6464.508700004" observedRunningTime="2026-01-28 17:33:13.65760317 +0000 UTC m=+6465.933574691" watchObservedRunningTime="2026-01-28 17:33:13.684671911 +0000 UTC m=+6465.960643422" Jan 28 17:33:13 crc kubenswrapper[4903]: I0128 17:33:13.712137 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-g48j5" podStartSLOduration=4.774111825 podStartE2EDuration="16.712115883s" podCreationTimestamp="2026-01-28 17:32:57 +0000 UTC" firstStartedPulling="2026-01-28 17:33:00.29230122 +0000 UTC m=+6452.568272731" lastFinishedPulling="2026-01-28 17:33:12.230305278 +0000 UTC m=+6464.506276789" observedRunningTime="2026-01-28 17:33:13.688292327 +0000 UTC m=+6465.964263838" watchObservedRunningTime="2026-01-28 17:33:13.712115883 +0000 UTC m=+6465.988087394" Jan 28 17:33:18 crc kubenswrapper[4903]: I0128 17:33:18.170888 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-g48j5" Jan 28 17:33:18 crc kubenswrapper[4903]: I0128 17:33:18.421006 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:33:18 crc kubenswrapper[4903]: E0128 17:33:18.421437 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:33:20 crc kubenswrapper[4903]: I0128 17:33:20.970716 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:20 crc kubenswrapper[4903]: I0128 17:33:20.971598 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" containerName="openstackclient" containerID="cri-o://b7d6438b08337c0639030e73a28b39c4fd9de920bd1f086a6698676889bc5677" gracePeriod=2 Jan 28 17:33:20 crc kubenswrapper[4903]: I0128 17:33:20.988906 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.072614 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:21 crc kubenswrapper[4903]: E0128 
17:33:21.073138 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" containerName="openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.073163 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" containerName="openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.073438 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" containerName="openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.074313 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.089606 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" podUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.129723 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.156297 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:21 crc kubenswrapper[4903]: E0128 17:33:21.157270 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-9ml82 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.165598 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.165688 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config-secret\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.165735 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.165953 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ml82\" (UniqueName: \"kubernetes.io/projected/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-kube-api-access-9ml82\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.170612 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.202600 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 
17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.204363 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.210597 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.229578 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.267819 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ml82\" (UniqueName: \"kubernetes.io/projected/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-kube-api-access-9ml82\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.267913 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.267962 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.268038 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config-secret\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.268078 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.268117 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config-secret\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.268144 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r4t8\" (UniqueName: \"kubernetes.io/projected/23ed4382-5fbe-42d4-8eca-139726609cdf-kube-api-access-9r4t8\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.268171 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config\") pod 
\"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.269330 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: E0128 17:33:21.273290 4903 projected.go:194] Error preparing data for projected volume kube-api-access-9ml82 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (38853ea9-85d0-42f7-a59c-c58f0a3eb02e) does not match the UID in record. The object might have been deleted and then recreated Jan 28 17:33:21 crc kubenswrapper[4903]: E0128 17:33:21.273351 4903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-kube-api-access-9ml82 podName:38853ea9-85d0-42f7-a59c-c58f0a3eb02e nodeName:}" failed. No retries permitted until 2026-01-28 17:33:21.773332209 +0000 UTC m=+6474.049303720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9ml82" (UniqueName: "kubernetes.io/projected/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-kube-api-access-9ml82") pod "openstackclient" (UID: "38853ea9-85d0-42f7-a59c-c58f0a3eb02e") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (38853ea9-85d0-42f7-a59c-c58f0a3eb02e) does not match the UID in record. The object might have been deleted and then recreated Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.279358 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config-secret\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.281224 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.370860 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r4t8\" (UniqueName: \"kubernetes.io/projected/23ed4382-5fbe-42d4-8eca-139726609cdf-kube-api-access-9r4t8\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.371016 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.371057 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.371116 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config-secret\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.374793 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.375557 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config-secret\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.394210 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.421837 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.423240 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.428388 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xbnft" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.431382 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r4t8\" (UniqueName: \"kubernetes.io/projected/23ed4382-5fbe-42d4-8eca-139726609cdf-kube-api-access-9r4t8\") pod \"openstackclient\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.460038 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.495340 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpkf\" (UniqueName: \"kubernetes.io/projected/7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc-kube-api-access-5xpkf\") pod \"kube-state-metrics-0\" (UID: \"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc\") " pod="openstack/kube-state-metrics-0" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.570411 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.599506 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpkf\" (UniqueName: \"kubernetes.io/projected/7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc-kube-api-access-5xpkf\") pod \"kube-state-metrics-0\" (UID: \"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc\") " pod="openstack/kube-state-metrics-0" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.658897 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.664719 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpkf\" (UniqueName: \"kubernetes.io/projected/7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc-kube-api-access-5xpkf\") pod \"kube-state-metrics-0\" (UID: \"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc\") " pod="openstack/kube-state-metrics-0" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.664763 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.688793 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.703018 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.703259 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config-secret\") pod \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.703332 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-combined-ca-bundle\") pod \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.703422 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config\") pod \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\" (UID: \"38853ea9-85d0-42f7-a59c-c58f0a3eb02e\") " Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.703891 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ml82\" (UniqueName: \"kubernetes.io/projected/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-kube-api-access-9ml82\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.704298 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "38853ea9-85d0-42f7-a59c-c58f0a3eb02e" (UID: "38853ea9-85d0-42f7-a59c-c58f0a3eb02e"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.713785 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "38853ea9-85d0-42f7-a59c-c58f0a3eb02e" (UID: "38853ea9-85d0-42f7-a59c-c58f0a3eb02e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.713935 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38853ea9-85d0-42f7-a59c-c58f0a3eb02e" (UID: "38853ea9-85d0-42f7-a59c-c58f0a3eb02e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.809401 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.809435 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.809471 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38853ea9-85d0-42f7-a59c-c58f0a3eb02e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:21 crc kubenswrapper[4903]: I0128 17:33:21.870129 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.214651 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.218141 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.231835 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.232423 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-7lwj6" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.232554 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.232691 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.232718 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.235991 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.321743 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhpz4\" (UniqueName: \"kubernetes.io/projected/3b9e657e-0edf-4f4e-be45-a79f5bed428c-kube-api-access-zhpz4\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.321803 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/3b9e657e-0edf-4f4e-be45-a79f5bed428c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.321833 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.321914 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.321966 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3b9e657e-0edf-4f4e-be45-a79f5bed428c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.321991 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-cluster-tls-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.322011 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3b9e657e-0edf-4f4e-be45-a79f5bed428c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.425017 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhpz4\" (UniqueName: \"kubernetes.io/projected/3b9e657e-0edf-4f4e-be45-a79f5bed428c-kube-api-access-zhpz4\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.425337 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/3b9e657e-0edf-4f4e-be45-a79f5bed428c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.425364 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.425444 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.425498 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3b9e657e-0edf-4f4e-be45-a79f5bed428c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.425545 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.425569 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3b9e657e-0edf-4f4e-be45-a79f5bed428c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.435891 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3b9e657e-0edf-4f4e-be45-a79f5bed428c-tls-assets\") pod 
\"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.437365 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/3b9e657e-0edf-4f4e-be45-a79f5bed428c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.442306 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.446904 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3b9e657e-0edf-4f4e-be45-a79f5bed428c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.464022 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhpz4\" (UniqueName: \"kubernetes.io/projected/3b9e657e-0edf-4f4e-be45-a79f5bed428c-kube-api-access-zhpz4\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.470682 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.477007 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" path="/var/lib/kubelet/pods/38853ea9-85d0-42f7-a59c-c58f0a3eb02e/volumes" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.489986 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/3b9e657e-0edf-4f4e-be45-a79f5bed428c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"3b9e657e-0edf-4f4e-be45-a79f5bed428c\") " pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.633608 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.679864 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.692015 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.764050 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.795656 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="38853ea9-85d0-42f7-a59c-c58f0a3eb02e" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.877714 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.881271 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.889044 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.889277 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.889437 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.889663 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.889819 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.889948 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.890202 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-6dkjl" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.890364 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 28 17:33:22 crc kubenswrapper[4903]: I0128 17:33:22.953903 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.011607 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.046931 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.046977 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047000 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047044 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwm4\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-kube-api-access-bwwm4\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047066 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047105 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-config\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047126 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/153f5373-4b00-4d8f-9817-86f0819f1146-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047153 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047213 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.047241 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.148940 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149347 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149435 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149465 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149498 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149607 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwm4\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-kube-api-access-bwwm4\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149641 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149706 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-config\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149742 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/153f5373-4b00-4d8f-9817-86f0819f1146-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.149800 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.151496 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.153642 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.155240 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.203451 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-config\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.203584 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwm4\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-kube-api-access-bwwm4\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.204985 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.205734 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/153f5373-4b00-4d8f-9817-86f0819f1146-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.206269 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.208639 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.209190 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.209254 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/058d63b9304d69aa417f3799d57c2be739525036846bc0f19bc7858ff004b3fe/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.325815 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.503549 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.537021 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.694590 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"3b9e657e-0edf-4f4e-be45-a79f5bed428c","Type":"ContainerStarted","Data":"cf95ac164864f3f665d3dbefe37c6a0e6f2900ae73bf07b869fc5225207e4b71"} Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.695833 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"23ed4382-5fbe-42d4-8eca-139726609cdf","Type":"ContainerStarted","Data":"906596ee2afa98ffd41f542762bbd9b00ffca26308a0a9b5707990ca7300d088"} Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.697456 4903 generic.go:334] "Generic (PLEG): container finished" podID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" containerID="b7d6438b08337c0639030e73a28b39c4fd9de920bd1f086a6698676889bc5677" exitCode=137 Jan 28 17:33:23 crc kubenswrapper[4903]: I0128 17:33:23.698696 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc","Type":"ContainerStarted","Data":"444c3f55dfcf6947f577dcf5fb1bbc4b4477ce95ff61e3a4181212affae72d41"} Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.097428 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.100341 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.182646 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config-secret\") pod \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.182777 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config\") pod \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.182966 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8lll\" (UniqueName: \"kubernetes.io/projected/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-kube-api-access-v8lll\") pod \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.183155 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-combined-ca-bundle\") pod \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\" (UID: \"96647ffd-a0c7-46f7-94f7-3ad08ae5de09\") " Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.207812 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-kube-api-access-v8lll" (OuterVolumeSpecName: "kube-api-access-v8lll") pod "96647ffd-a0c7-46f7-94f7-3ad08ae5de09" (UID: "96647ffd-a0c7-46f7-94f7-3ad08ae5de09"). InnerVolumeSpecName "kube-api-access-v8lll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.231048 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "96647ffd-a0c7-46f7-94f7-3ad08ae5de09" (UID: "96647ffd-a0c7-46f7-94f7-3ad08ae5de09"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.239673 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96647ffd-a0c7-46f7-94f7-3ad08ae5de09" (UID: "96647ffd-a0c7-46f7-94f7-3ad08ae5de09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.294028 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.294070 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.294084 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8lll\" (UniqueName: \"kubernetes.io/projected/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-kube-api-access-v8lll\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.299517 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "96647ffd-a0c7-46f7-94f7-3ad08ae5de09" (UID: "96647ffd-a0c7-46f7-94f7-3ad08ae5de09"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.396368 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/96647ffd-a0c7-46f7-94f7-3ad08ae5de09-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.427737 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96647ffd-a0c7-46f7-94f7-3ad08ae5de09" path="/var/lib/kubelet/pods/96647ffd-a0c7-46f7-94f7-3ad08ae5de09/volumes" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.714254 4903 scope.go:117] "RemoveContainer" containerID="b7d6438b08337c0639030e73a28b39c4fd9de920bd1f086a6698676889bc5677" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.714433 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.718301 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerStarted","Data":"1490cf69d5b1f3edb19c7528b7d6fb794ff5774df1247d702d7351229f236795"} Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.724213 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"23ed4382-5fbe-42d4-8eca-139726609cdf","Type":"ContainerStarted","Data":"1f9147da7b7d4a4b2f70ea563917512657415fe25368612e72f11d409e6682d5"} Jan 28 17:33:24 crc kubenswrapper[4903]: I0128 17:33:24.755850 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.755829082 podStartE2EDuration="3.755829082s" podCreationTimestamp="2026-01-28 17:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:33:24.739794915 +0000 UTC m=+6477.015766446" watchObservedRunningTime="2026-01-28 17:33:24.755829082 +0000 UTC m=+6477.031800593" Jan 28 17:33:25 crc kubenswrapper[4903]: I0128 17:33:25.737201 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc","Type":"ContainerStarted","Data":"0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae"} Jan 28 17:33:25 crc kubenswrapper[4903]: I0128 17:33:25.737605 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 17:33:25 crc kubenswrapper[4903]: I0128 17:33:25.787323 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.235459608 podStartE2EDuration="4.786943862s" podCreationTimestamp="2026-01-28 17:33:21 +0000 UTC" firstStartedPulling="2026-01-28 17:33:23.039548521 +0000 UTC m=+6475.315520032" lastFinishedPulling="2026-01-28 17:33:24.591032775 +0000 UTC m=+6476.867004286" observedRunningTime="2026-01-28 17:33:25.754189689 +0000 UTC m=+6478.030161200" watchObservedRunningTime="2026-01-28 17:33:25.786943862 +0000 UTC m=+6478.062915373" Jan 28 17:33:30 crc kubenswrapper[4903]: I0128 17:33:30.805149 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerStarted","Data":"854d3fc978d3df08e68795789de7dfb52aefaf72dab9d36a1fca063a0b21dc84"} Jan 28 17:33:30 crc kubenswrapper[4903]: I0128 17:33:30.808103 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"3b9e657e-0edf-4f4e-be45-a79f5bed428c","Type":"ContainerStarted","Data":"80c816a268b0fa7cb78a56eaec1b4d642e6922e46c6d6a41eeb52cca3d2ad7b3"} Jan 28 17:33:31 crc kubenswrapper[4903]: I0128 17:33:31.878000 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 17:33:32 crc kubenswrapper[4903]: I0128 17:33:32.414455 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:33:32 crc kubenswrapper[4903]: E0128 17:33:32.415201 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:33:37 crc kubenswrapper[4903]: I0128 17:33:37.877846 4903 generic.go:334] "Generic (PLEG): container finished" podID="153f5373-4b00-4d8f-9817-86f0819f1146" containerID="854d3fc978d3df08e68795789de7dfb52aefaf72dab9d36a1fca063a0b21dc84" exitCode=0 Jan 28 17:33:37 crc kubenswrapper[4903]: I0128 17:33:37.877918 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerDied","Data":"854d3fc978d3df08e68795789de7dfb52aefaf72dab9d36a1fca063a0b21dc84"} Jan 28 17:33:37 crc kubenswrapper[4903]: I0128 17:33:37.883179 4903 generic.go:334] "Generic (PLEG): container finished" podID="3b9e657e-0edf-4f4e-be45-a79f5bed428c" containerID="80c816a268b0fa7cb78a56eaec1b4d642e6922e46c6d6a41eeb52cca3d2ad7b3" exitCode=0 Jan 28 17:33:37 crc kubenswrapper[4903]: I0128 17:33:37.883226 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"3b9e657e-0edf-4f4e-be45-a79f5bed428c","Type":"ContainerDied","Data":"80c816a268b0fa7cb78a56eaec1b4d642e6922e46c6d6a41eeb52cca3d2ad7b3"} Jan 28 17:33:41 crc kubenswrapper[4903]: I0128 17:33:41.929266 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"3b9e657e-0edf-4f4e-be45-a79f5bed428c","Type":"ContainerStarted","Data":"14c59390d02a74e0eb2f185c0dc6fd816bbbd0f823dc38bc5e380f7a519654de"} Jan 28 17:33:44 crc kubenswrapper[4903]: I0128 17:33:44.965062 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"3b9e657e-0edf-4f4e-be45-a79f5bed428c","Type":"ContainerStarted","Data":"3820838b42faeb9b293f625d0bef3aae88baf79d8db619e58dc8651949132cd5"} Jan 28 17:33:44 crc kubenswrapper[4903]: I0128 17:33:44.965728 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:44 crc kubenswrapper[4903]: I0128 17:33:44.969511 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 28 17:33:44 crc kubenswrapper[4903]: I0128 17:33:44.996734 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.896007654 podStartE2EDuration="22.996709722s" podCreationTimestamp="2026-01-28 17:33:22 +0000 UTC" firstStartedPulling="2026-01-28 17:33:23.531713323 +0000 UTC m=+6475.807684834" lastFinishedPulling="2026-01-28 17:33:40.632415391 +0000 UTC m=+6492.908386902" observedRunningTime="2026-01-28 17:33:44.986195792 +0000 UTC m=+6497.262167313" watchObservedRunningTime="2026-01-28 17:33:44.996709722 +0000 UTC m=+6497.272681233" Jan 28 17:33:47 crc kubenswrapper[4903]: I0128 17:33:47.413690 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:33:47 crc kubenswrapper[4903]: E0128 17:33:47.414314 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:33:47 crc kubenswrapper[4903]: I0128 17:33:47.996850 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerStarted","Data":"84de4d0b5526fa3374f17f173d51fd36570c4b7c9d2258f127c31e8c0df0510c"} Jan 28 17:33:52 crc kubenswrapper[4903]: I0128 17:33:52.040035 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerStarted","Data":"63386b5e05e4c02f639a16bad5d42411a9af34bc6671f8426c737db5a5ca1608"} Jan 28 17:33:55 crc kubenswrapper[4903]: I0128 17:33:55.069572 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerStarted","Data":"e56c47501f5ec3afad7841aaafff5c518a4a5af49166e41c44e47f6f24b3035d"} Jan 28 17:33:55 crc kubenswrapper[4903]: I0128 17:33:55.100158 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.8395116849999997 podStartE2EDuration="34.100115633s" podCreationTimestamp="2026-01-28 17:33:21 +0000 UTC" firstStartedPulling="2026-01-28 17:33:24.14328212 +0000 UTC m=+6476.419253621" lastFinishedPulling="2026-01-28 17:33:54.403886058 +0000 UTC m=+6506.679857569" observedRunningTime="2026-01-28 17:33:55.09588541 +0000 UTC m=+6507.371856921" watchObservedRunningTime="2026-01-28 17:33:55.100115633 +0000 UTC m=+6507.376087144" Jan 28 17:33:57 crc kubenswrapper[4903]: I0128 17:33:57.508918 4903 scope.go:117] "RemoveContainer" containerID="121ccc380a35dac833a3c20eeed9933563bfe8ada54941c0f6641c80e0751d22" Jan 28 17:33:57 crc kubenswrapper[4903]: I0128 17:33:57.536310 4903 scope.go:117] "RemoveContainer" containerID="25c04baf658d0c8c3a7ce474272c54534c9cea231fe9b6dac28b47bb213d7f18" Jan 28 17:33:57 crc kubenswrapper[4903]: I0128 17:33:57.570745 4903 scope.go:117] "RemoveContainer" containerID="01378104c04ba7f6972c481a3b5933c8b63ab01fa4b77fc64ff9e5bd9a1b3cd8" Jan 28 17:33:57 crc kubenswrapper[4903]: I0128 17:33:57.626437 4903 scope.go:117] "RemoveContainer" containerID="9efa41887320f9581d3e54f25b926d7296397e61a57e173acf9fec3970722af7" Jan 28 17:33:57 crc kubenswrapper[4903]: I0128 17:33:57.671991 4903 scope.go:117] "RemoveContainer" containerID="3a751e8e4561fc579480f0746f38341c73e8d20b82d7d62c29f630309edc816e" Jan 28 17:33:57 crc kubenswrapper[4903]: I0128 17:33:57.721542 4903 scope.go:117] "RemoveContainer" containerID="88872ba05ff6683e838f4f44dd63e95937ed028220b451c609a2e55b8f21edab" Jan 28 17:33:57 crc kubenswrapper[4903]: I0128 17:33:57.767474 4903 scope.go:117] "RemoveContainer" containerID="784dbbe4c3d5afcc264b5c7a83e2b6567de35d190193e3e8b68f1cb22b81d1b4" Jan 28 17:33:58 crc kubenswrapper[4903]: I0128 17:33:58.541916 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:00 crc kubenswrapper[4903]: I0128 17:34:00.422678 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:34:00 crc kubenswrapper[4903]: E0128 17:34:00.429618 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:34:00 crc kubenswrapper[4903]: I0128 17:34:00.996122 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.003303 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.006679 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.007806 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.010412 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.094218 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.094301 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.094362 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-log-httpd\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.094400 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7gk\" (UniqueName: \"kubernetes.io/projected/4a35b759-1510-4949-82eb-5a492d973fa7-kube-api-access-th7gk\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.094426 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-run-httpd\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.094593 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-config-data\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.094654 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-scripts\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.197091 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-log-httpd\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.197155 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7gk\" (UniqueName: \"kubernetes.io/projected/4a35b759-1510-4949-82eb-5a492d973fa7-kube-api-access-th7gk\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.197184 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-run-httpd\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.197231 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-config-data\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.197258 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-scripts\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.197605 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.197670 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.200211 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-log-httpd\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.201007 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-run-httpd\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.208198 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.208350 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-scripts\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.208358 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.217061 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-config-data\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.219421 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7gk\" (UniqueName: \"kubernetes.io/projected/4a35b759-1510-4949-82eb-5a492d973fa7-kube-api-access-th7gk\") pod \"ceilometer-0\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.329948 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:01 crc kubenswrapper[4903]: W0128 17:34:01.855252 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a35b759_1510_4949_82eb_5a492d973fa7.slice/crio-0bdbe9ba84a46560e596c3bf83e8d33ba398250a7ff23e659ca44e0101e0bc55 WatchSource:0}: Error finding container 0bdbe9ba84a46560e596c3bf83e8d33ba398250a7ff23e659ca44e0101e0bc55: Status 404 returned error can't find the container with id 0bdbe9ba84a46560e596c3bf83e8d33ba398250a7ff23e659ca44e0101e0bc55 Jan 28 17:34:01 crc kubenswrapper[4903]: I0128 17:34:01.859656 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:02 crc kubenswrapper[4903]: I0128 17:34:02.152443 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerStarted","Data":"0bdbe9ba84a46560e596c3bf83e8d33ba398250a7ff23e659ca44e0101e0bc55"} Jan 28 17:34:03 crc kubenswrapper[4903]: I0128 17:34:03.165143 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerStarted","Data":"c5ce3db2955609bba206be3d4472b6f62ffc527dd0668841a659fa7a723f23d9"} Jan 28 17:34:04 crc kubenswrapper[4903]: I0128 17:34:04.176315 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerStarted","Data":"a4463a829411f0bfb79101d27c0de91085475c8654cbc6cd53bba2edc1eb0e0b"} Jan 28 17:34:04 crc kubenswrapper[4903]: I0128 17:34:04.177143 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerStarted","Data":"5805cc99a9b8631a622cccbdddad9c20f241d9a0123967342a7d9a31080976d3"} Jan 28 17:34:06 crc kubenswrapper[4903]: I0128 17:34:06.196279 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerStarted","Data":"d05d17385154a9a5e9213a1eb50e367e9596883138cc195e81fc010f6b1331a3"} Jan 28 17:34:06 crc kubenswrapper[4903]: I0128 17:34:06.198081 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 17:34:06 crc kubenswrapper[4903]: I0128 17:34:06.219261 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.593020056 podStartE2EDuration="6.219241754s" podCreationTimestamp="2026-01-28 17:34:00 +0000 UTC" firstStartedPulling="2026-01-28 17:34:01.857642726 +0000 UTC m=+6514.133614237" lastFinishedPulling="2026-01-28 17:34:05.483864434 +0000 UTC m=+6517.759835935" observedRunningTime="2026-01-28 17:34:06.215643118 +0000 UTC m=+6518.491614649" watchObservedRunningTime="2026-01-28 17:34:06.219241754 +0000 UTC m=+6518.495213265" Jan 28 17:34:08 crc kubenswrapper[4903]: I0128 17:34:08.542508 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:08 crc kubenswrapper[4903]: I0128 17:34:08.544681 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:09 crc kubenswrapper[4903]: I0128 17:34:09.225556 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.045817 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kcm7v"] Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.056898 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kvc96"] Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.067167 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kcm7v"] Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.079062 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kvc96"] Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.445214 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac2962a-7b79-419a-a524-5d2b6b3d3a8b" path="/var/lib/kubelet/pods/7ac2962a-7b79-419a-a524-5d2b6b3d3a8b/volumes" Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.446319 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d122611d-0720-468d-8841-174e00f898fe" path="/var/lib/kubelet/pods/d122611d-0720-468d-8841-174e00f898fe/volumes" Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.906600 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.907181 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" containerName="openstackclient" containerID="cri-o://1f9147da7b7d4a4b2f70ea563917512657415fe25368612e72f11d409e6682d5" gracePeriod=2 Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.917978 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/openstackclient"] Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.969781 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 17:34:10 crc kubenswrapper[4903]: E0128 17:34:10.970190 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" containerName="openstackclient" Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.970208 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" containerName="openstackclient" Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.970411 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" containerName="openstackclient" Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.971059 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.983196 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:34:10 crc kubenswrapper[4903]: I0128 17:34:10.988907 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="23ed4382-5fbe-42d4-8eca-139726609cdf" podUID="8f1dc9cc-7637-42cf-a9b0-1b8d141a1534" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.021593 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzjz\" (UniqueName: \"kubernetes.io/projected/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-kube-api-access-mrzjz\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.021674 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.021711 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-openstack-config\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.021800 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.053395 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b894-account-create-update-dkrqc"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.079703 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-02f9-account-create-update-g5jhl"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.110505 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-47f4p"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.123622 4903 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-02f9-account-create-update-g5jhl"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.125132 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.125184 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-openstack-config\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.125250 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.125381 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzjz\" (UniqueName: \"kubernetes.io/projected/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-kube-api-access-mrzjz\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.129256 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-openstack-config\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.139611 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b894-account-create-update-dkrqc"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.147278 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.155603 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-47f4p"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.178724 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.180732 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8855-account-create-update-jv4rw"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.198187 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzjz\" (UniqueName: \"kubernetes.io/projected/8f1dc9cc-7637-42cf-a9b0-1b8d141a1534-kube-api-access-mrzjz\") pod \"openstackclient\" (UID: \"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534\") " 
pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.222154 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8855-account-create-update-jv4rw"] Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.301953 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:34:11 crc kubenswrapper[4903]: I0128 17:34:11.866828 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.314558 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534","Type":"ContainerStarted","Data":"138f4be3a31ba666719f7200fb45d0b733f6f01b404cb03af9b485b0b22e79ad"} Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.314805 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f1dc9cc-7637-42cf-a9b0-1b8d141a1534","Type":"ContainerStarted","Data":"b869e39c49f49ec219885a4c4a96743a043f4f9ffa6b24807666052d81cd4d2e"} Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.336614 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.336595956 podStartE2EDuration="2.336595956s" podCreationTimestamp="2026-01-28 17:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:34:12.332202619 +0000 UTC m=+6524.608174130" watchObservedRunningTime="2026-01-28 17:34:12.336595956 +0000 UTC m=+6524.612567467" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.399812 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-pzbhp"] Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.401506 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.430865 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10367b1a-b989-4e5b-b159-de422134c172" path="/var/lib/kubelet/pods/10367b1a-b989-4e5b-b159-de422134c172/volumes" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.431436 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258fac9e-ef70-4e82-8767-1858cf6272b6" path="/var/lib/kubelet/pods/258fac9e-ef70-4e82-8767-1858cf6272b6/volumes" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.432057 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46797207-aaf7-442a-a249-caa3998a37cb" path="/var/lib/kubelet/pods/46797207-aaf7-442a-a249-caa3998a37cb/volumes" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.434219 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988c20f0-d6bf-4819-b2ba-4323f7a428af" path="/var/lib/kubelet/pods/988c20f0-d6bf-4819-b2ba-4323f7a428af/volumes" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.435065 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-pzbhp"] Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.502232 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nww6\" (UniqueName: \"kubernetes.io/projected/a4658f79-2284-4761-b715-0e0af88f2439-kube-api-access-5nww6\") pod \"aodh-db-create-pzbhp\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.502392 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4658f79-2284-4761-b715-0e0af88f2439-operator-scripts\") pod \"aodh-db-create-pzbhp\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.507948 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-a817-account-create-update-ztcqh"] Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.509271 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.511505 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.543071 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a817-account-create-update-ztcqh"] Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.604461 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4658f79-2284-4761-b715-0e0af88f2439-operator-scripts\") pod \"aodh-db-create-pzbhp\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.604796 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nww6\" (UniqueName: \"kubernetes.io/projected/a4658f79-2284-4761-b715-0e0af88f2439-kube-api-access-5nww6\") pod \"aodh-db-create-pzbhp\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.606405 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4658f79-2284-4761-b715-0e0af88f2439-operator-scripts\") pod \"aodh-db-create-pzbhp\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.626331 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nww6\" (UniqueName: \"kubernetes.io/projected/a4658f79-2284-4761-b715-0e0af88f2439-kube-api-access-5nww6\") pod \"aodh-db-create-pzbhp\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.707472 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29fp\" (UniqueName: \"kubernetes.io/projected/3af3e35a-a105-4812-9f41-c49343319188-kube-api-access-h29fp\") pod \"aodh-a817-account-create-update-ztcqh\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.707604 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af3e35a-a105-4812-9f41-c49343319188-operator-scripts\") pod \"aodh-a817-account-create-update-ztcqh\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.735654 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.809803 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h29fp\" (UniqueName: \"kubernetes.io/projected/3af3e35a-a105-4812-9f41-c49343319188-kube-api-access-h29fp\") pod \"aodh-a817-account-create-update-ztcqh\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.810271 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af3e35a-a105-4812-9f41-c49343319188-operator-scripts\") pod \"aodh-a817-account-create-update-ztcqh\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.811102 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af3e35a-a105-4812-9f41-c49343319188-operator-scripts\") pod \"aodh-a817-account-create-update-ztcqh\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.831723 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h29fp\" (UniqueName: \"kubernetes.io/projected/3af3e35a-a105-4812-9f41-c49343319188-kube-api-access-h29fp\") pod \"aodh-a817-account-create-update-ztcqh\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.841229 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.963549 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.964255 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="prometheus" containerID="cri-o://84de4d0b5526fa3374f17f173d51fd36570c4b7c9d2258f127c31e8c0df0510c" gracePeriod=600 Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.964630 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="config-reloader" containerID="cri-o://63386b5e05e4c02f639a16bad5d42411a9af34bc6671f8426c737db5a5ca1608" gracePeriod=600 Jan 28 17:34:12 crc kubenswrapper[4903]: I0128 17:34:12.964622 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="thanos-sidecar" containerID="cri-o://e56c47501f5ec3afad7841aaafff5c518a4a5af49166e41c44e47f6f24b3035d" gracePeriod=600 Jan 28 17:34:13 crc kubenswrapper[4903]: W0128 17:34:13.324716 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4658f79_2284_4761_b715_0e0af88f2439.slice/crio-097f5dcfe713349768e01b2ce350626f721ffb1524fcf37b132dac70a43668b6 WatchSource:0}: Error finding container 097f5dcfe713349768e01b2ce350626f721ffb1524fcf37b132dac70a43668b6: Status 404 returned error can't find the container with id 097f5dcfe713349768e01b2ce350626f721ffb1524fcf37b132dac70a43668b6 Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.326963 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-pzbhp"] Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.329249 4903 generic.go:334] "Generic (PLEG): container finished" podID="153f5373-4b00-4d8f-9817-86f0819f1146" containerID="e56c47501f5ec3afad7841aaafff5c518a4a5af49166e41c44e47f6f24b3035d" exitCode=0 Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.329282 4903 generic.go:334] "Generic (PLEG): container finished" podID="153f5373-4b00-4d8f-9817-86f0819f1146" containerID="63386b5e05e4c02f639a16bad5d42411a9af34bc6671f8426c737db5a5ca1608" exitCode=0 Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.329298 4903 generic.go:334] "Generic (PLEG): container finished" podID="153f5373-4b00-4d8f-9817-86f0819f1146" containerID="84de4d0b5526fa3374f17f173d51fd36570c4b7c9d2258f127c31e8c0df0510c" exitCode=0 Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.329316 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerDied","Data":"e56c47501f5ec3afad7841aaafff5c518a4a5af49166e41c44e47f6f24b3035d"} Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.329383 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerDied","Data":"63386b5e05e4c02f639a16bad5d42411a9af34bc6671f8426c737db5a5ca1608"} Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.329394 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerDied","Data":"84de4d0b5526fa3374f17f173d51fd36570c4b7c9d2258f127c31e8c0df0510c"} Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.333316 4903 generic.go:334] "Generic (PLEG): container finished" podID="23ed4382-5fbe-42d4-8eca-139726609cdf" containerID="1f9147da7b7d4a4b2f70ea563917512657415fe25368612e72f11d409e6682d5" exitCode=137 Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.334148 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="906596ee2afa98ffd41f542762bbd9b00ffca26308a0a9b5707990ca7300d088" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.383956 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.414763 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:34:13 crc kubenswrapper[4903]: E0128 17:34:13.417631 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.501511 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a817-account-create-update-ztcqh"] Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.526400 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config-secret\") pod \"23ed4382-5fbe-42d4-8eca-139726609cdf\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.526645 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-combined-ca-bundle\") pod \"23ed4382-5fbe-42d4-8eca-139726609cdf\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.526932 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r4t8\" (UniqueName: \"kubernetes.io/projected/23ed4382-5fbe-42d4-8eca-139726609cdf-kube-api-access-9r4t8\") pod \"23ed4382-5fbe-42d4-8eca-139726609cdf\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.526964 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config\") pod \"23ed4382-5fbe-42d4-8eca-139726609cdf\" (UID: \"23ed4382-5fbe-42d4-8eca-139726609cdf\") " Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.535728 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ed4382-5fbe-42d4-8eca-139726609cdf-kube-api-access-9r4t8" (OuterVolumeSpecName: "kube-api-access-9r4t8") pod "23ed4382-5fbe-42d4-8eca-139726609cdf" (UID: "23ed4382-5fbe-42d4-8eca-139726609cdf"). InnerVolumeSpecName "kube-api-access-9r4t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.542606 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.136:9090/-/ready\": dial tcp 10.217.1.136:9090: connect: connection refused" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.555031 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "23ed4382-5fbe-42d4-8eca-139726609cdf" (UID: "23ed4382-5fbe-42d4-8eca-139726609cdf"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.580741 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23ed4382-5fbe-42d4-8eca-139726609cdf" (UID: "23ed4382-5fbe-42d4-8eca-139726609cdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.588806 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "23ed4382-5fbe-42d4-8eca-139726609cdf" (UID: "23ed4382-5fbe-42d4-8eca-139726609cdf"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.632070 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.632109 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ed4382-5fbe-42d4-8eca-139726609cdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.632121 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r4t8\" (UniqueName: \"kubernetes.io/projected/23ed4382-5fbe-42d4-8eca-139726609cdf-kube-api-access-9r4t8\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.632132 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23ed4382-5fbe-42d4-8eca-139726609cdf-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:13 crc kubenswrapper[4903]: I0128 17:34:13.917056 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.039875 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-web-config\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.039944 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-config\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.040028 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-2\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.040064 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-thanos-prometheus-http-client-file\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.040091 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-0\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.040375 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.040441 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/153f5373-4b00-4d8f-9817-86f0819f1146-config-out\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.040496 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-1\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.040876 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.041259 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-tls-assets\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.041357 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwwm4\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-kube-api-access-bwwm4\") pod \"153f5373-4b00-4d8f-9817-86f0819f1146\" (UID: \"153f5373-4b00-4d8f-9817-86f0819f1146\") " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.042567 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.042797 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.042900 4903 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.042951 4903 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.045410 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-config" (OuterVolumeSpecName: "config") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.045460 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.045635 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/153f5373-4b00-4d8f-9817-86f0819f1146-config-out" (OuterVolumeSpecName: "config-out") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.046351 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-kube-api-access-bwwm4" (OuterVolumeSpecName: "kube-api-access-bwwm4") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "kube-api-access-bwwm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.046970 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.070211 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "pvc-acee0157-8432-426e-88b4-e17ebec1928d". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.071566 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-web-config" (OuterVolumeSpecName: "web-config") pod "153f5373-4b00-4d8f-9817-86f0819f1146" (UID: "153f5373-4b00-4d8f-9817-86f0819f1146"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.147969 4903 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-web-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.148026 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.148040 4903 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/153f5373-4b00-4d8f-9817-86f0819f1146-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.148061 4903 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/153f5373-4b00-4d8f-9817-86f0819f1146-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.148101 4903 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") on node \"crc\" " Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.148123 4903 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/153f5373-4b00-4d8f-9817-86f0819f1146-config-out\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.148139 4903 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.148153 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwwm4\" (UniqueName: \"kubernetes.io/projected/153f5373-4b00-4d8f-9817-86f0819f1146-kube-api-access-bwwm4\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.195841 4903 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.196123 4903 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-acee0157-8432-426e-88b4-e17ebec1928d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d") on node "crc" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.250715 4903 reconciler_common.go:293] "Volume detached for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.345521 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"153f5373-4b00-4d8f-9817-86f0819f1146","Type":"ContainerDied","Data":"1490cf69d5b1f3edb19c7528b7d6fb794ff5774df1247d702d7351229f236795"} Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.345630 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.345920 4903 scope.go:117] "RemoveContainer" containerID="e56c47501f5ec3afad7841aaafff5c518a4a5af49166e41c44e47f6f24b3035d" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.349433 4903 generic.go:334] "Generic (PLEG): container finished" podID="3af3e35a-a105-4812-9f41-c49343319188" containerID="dfdd4ee0a64e2f12c19cef7560daf8f22096487bad6e9bb5efa21a49d32923b2" exitCode=0 Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.349633 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a817-account-create-update-ztcqh" event={"ID":"3af3e35a-a105-4812-9f41-c49343319188","Type":"ContainerDied","Data":"dfdd4ee0a64e2f12c19cef7560daf8f22096487bad6e9bb5efa21a49d32923b2"} Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.349747 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a817-account-create-update-ztcqh" event={"ID":"3af3e35a-a105-4812-9f41-c49343319188","Type":"ContainerStarted","Data":"3f98ae5cc25f1b7edcef62245190776d8a3b268b224dbb5886e0006a4fed5206"} Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.351931 4903 generic.go:334] "Generic (PLEG): container finished" podID="a4658f79-2284-4761-b715-0e0af88f2439" containerID="8f363c2379a3abd90a0379ed8a346e41b12df7d3790a0e192c7a5cb1c13dc5d7" exitCode=0 Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.351989 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-pzbhp" event={"ID":"a4658f79-2284-4761-b715-0e0af88f2439","Type":"ContainerDied","Data":"8f363c2379a3abd90a0379ed8a346e41b12df7d3790a0e192c7a5cb1c13dc5d7"} Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.352130 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-pzbhp" event={"ID":"a4658f79-2284-4761-b715-0e0af88f2439","Type":"ContainerStarted","Data":"097f5dcfe713349768e01b2ce350626f721ffb1524fcf37b132dac70a43668b6"} Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.352234 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.381764 4903 scope.go:117] "RemoveContainer" containerID="63386b5e05e4c02f639a16bad5d42411a9af34bc6671f8426c737db5a5ca1608" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.401899 4903 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="23ed4382-5fbe-42d4-8eca-139726609cdf" podUID="8f1dc9cc-7637-42cf-a9b0-1b8d141a1534" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.408539 4903 scope.go:117] "RemoveContainer" containerID="84de4d0b5526fa3374f17f173d51fd36570c4b7c9d2258f127c31e8c0df0510c" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.413735 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.441973 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ed4382-5fbe-42d4-8eca-139726609cdf" path="/var/lib/kubelet/pods/23ed4382-5fbe-42d4-8eca-139726609cdf/volumes" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.442566 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.442595 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:34:14 crc kubenswrapper[4903]: E0128 17:34:14.442876 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="init-config-reloader" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.442887 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="init-config-reloader" Jan 28 17:34:14 crc kubenswrapper[4903]: E0128 17:34:14.442902 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="prometheus" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.442909 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="prometheus" Jan 28 17:34:14 crc kubenswrapper[4903]: E0128 17:34:14.442939 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="config-reloader" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.442945 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="config-reloader" Jan 28 17:34:14 crc kubenswrapper[4903]: E0128 17:34:14.442967 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="thanos-sidecar" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.442972 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="thanos-sidecar" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.443146 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="prometheus" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.443161 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="thanos-sidecar" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.443174 4903 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="153f5373-4b00-4d8f-9817-86f0819f1146" containerName="config-reloader" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.446541 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.450762 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.450912 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.450991 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.451098 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.451114 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.457130 4903 scope.go:117] "RemoveContainer" containerID="854d3fc978d3df08e68795789de7dfb52aefaf72dab9d36a1fca063a0b21dc84" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.466928 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.470235 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.470371 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.470761 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-6dkjl" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.471636 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562179 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb3c9671-9ec5-4516-ba80-85f085f39c57-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562380 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562460 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 
17:34:14.562483 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562576 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562662 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562714 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562764 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562824 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562843 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb3c9671-9ec5-4516-ba80-85f085f39c57-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562858 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s78bz\" (UniqueName: \"kubernetes.io/projected/cb3c9671-9ec5-4516-ba80-85f085f39c57-kube-api-access-s78bz\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 
17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562900 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.562941 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.664961 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665034 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665101 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb3c9671-9ec5-4516-ba80-85f085f39c57-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665146 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665177 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665196 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665223 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665259 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665301 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665331 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665364 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665385 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb3c9671-9ec5-4516-ba80-85f085f39c57-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.665401 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s78bz\" (UniqueName: \"kubernetes.io/projected/cb3c9671-9ec5-4516-ba80-85f085f39c57-kube-api-access-s78bz\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.666734 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.666791 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " 
pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.667446 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cb3c9671-9ec5-4516-ba80-85f085f39c57-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.671467 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.671492 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.671978 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.672002 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.672822 4903 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.672946 4903 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/058d63b9304d69aa417f3799d57c2be739525036846bc0f19bc7858ff004b3fe/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.673063 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.673284 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb3c9671-9ec5-4516-ba80-85f085f39c57-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.674786 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3c9671-9ec5-4516-ba80-85f085f39c57-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.676863 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb3c9671-9ec5-4516-ba80-85f085f39c57-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.686634 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s78bz\" (UniqueName: \"kubernetes.io/projected/cb3c9671-9ec5-4516-ba80-85f085f39c57-kube-api-access-s78bz\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.733306 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-acee0157-8432-426e-88b4-e17ebec1928d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-acee0157-8432-426e-88b4-e17ebec1928d\") pod \"prometheus-metric-storage-0\" (UID: \"cb3c9671-9ec5-4516-ba80-85f085f39c57\") " pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:14 crc kubenswrapper[4903]: I0128 17:34:14.783225 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:15 crc kubenswrapper[4903]: W0128 17:34:15.272226 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb3c9671_9ec5_4516_ba80_85f085f39c57.slice/crio-7030cd5e34f5315471b82510a1e9e683c82277caa5687ccc08e70bcbdc44fcf0 WatchSource:0}: Error finding container 7030cd5e34f5315471b82510a1e9e683c82277caa5687ccc08e70bcbdc44fcf0: Status 404 returned error can't find the container with id 7030cd5e34f5315471b82510a1e9e683c82277caa5687ccc08e70bcbdc44fcf0 Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.273858 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.366593 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb3c9671-9ec5-4516-ba80-85f085f39c57","Type":"ContainerStarted","Data":"7030cd5e34f5315471b82510a1e9e683c82277caa5687ccc08e70bcbdc44fcf0"} Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.896035 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.908077 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.993865 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h29fp\" (UniqueName: \"kubernetes.io/projected/3af3e35a-a105-4812-9f41-c49343319188-kube-api-access-h29fp\") pod \"3af3e35a-a105-4812-9f41-c49343319188\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.994045 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nww6\" (UniqueName: \"kubernetes.io/projected/a4658f79-2284-4761-b715-0e0af88f2439-kube-api-access-5nww6\") pod \"a4658f79-2284-4761-b715-0e0af88f2439\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.994092 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af3e35a-a105-4812-9f41-c49343319188-operator-scripts\") pod \"3af3e35a-a105-4812-9f41-c49343319188\" (UID: \"3af3e35a-a105-4812-9f41-c49343319188\") " Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.995185 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af3e35a-a105-4812-9f41-c49343319188-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3af3e35a-a105-4812-9f41-c49343319188" (UID: "3af3e35a-a105-4812-9f41-c49343319188"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.995376 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4658f79-2284-4761-b715-0e0af88f2439-operator-scripts\") pod \"a4658f79-2284-4761-b715-0e0af88f2439\" (UID: \"a4658f79-2284-4761-b715-0e0af88f2439\") " Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.996127 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4658f79-2284-4761-b715-0e0af88f2439-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4658f79-2284-4761-b715-0e0af88f2439" (UID: "a4658f79-2284-4761-b715-0e0af88f2439"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.997060 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af3e35a-a105-4812-9f41-c49343319188-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.997083 4903 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4658f79-2284-4761-b715-0e0af88f2439-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:15 crc kubenswrapper[4903]: I0128 17:34:15.998645 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af3e35a-a105-4812-9f41-c49343319188-kube-api-access-h29fp" (OuterVolumeSpecName: "kube-api-access-h29fp") pod "3af3e35a-a105-4812-9f41-c49343319188" (UID: "3af3e35a-a105-4812-9f41-c49343319188"). InnerVolumeSpecName "kube-api-access-h29fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.001515 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4658f79-2284-4761-b715-0e0af88f2439-kube-api-access-5nww6" (OuterVolumeSpecName: "kube-api-access-5nww6") pod "a4658f79-2284-4761-b715-0e0af88f2439" (UID: "a4658f79-2284-4761-b715-0e0af88f2439"). InnerVolumeSpecName "kube-api-access-5nww6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.098829 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h29fp\" (UniqueName: \"kubernetes.io/projected/3af3e35a-a105-4812-9f41-c49343319188-kube-api-access-h29fp\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.098858 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nww6\" (UniqueName: \"kubernetes.io/projected/a4658f79-2284-4761-b715-0e0af88f2439-kube-api-access-5nww6\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.379232 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-pzbhp" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.379250 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-pzbhp" event={"ID":"a4658f79-2284-4761-b715-0e0af88f2439","Type":"ContainerDied","Data":"097f5dcfe713349768e01b2ce350626f721ffb1524fcf37b132dac70a43668b6"} Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.379291 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097f5dcfe713349768e01b2ce350626f721ffb1524fcf37b132dac70a43668b6" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.380822 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a817-account-create-update-ztcqh" event={"ID":"3af3e35a-a105-4812-9f41-c49343319188","Type":"ContainerDied","Data":"3f98ae5cc25f1b7edcef62245190776d8a3b268b224dbb5886e0006a4fed5206"} Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.380877 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f98ae5cc25f1b7edcef62245190776d8a3b268b224dbb5886e0006a4fed5206" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.380889 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a817-account-create-update-ztcqh" Jan 28 17:34:16 crc kubenswrapper[4903]: I0128 17:34:16.424900 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153f5373-4b00-4d8f-9817-86f0819f1146" path="/var/lib/kubelet/pods/153f5373-4b00-4d8f-9817-86f0819f1146/volumes" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.933570 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-chqvt"] Jan 28 17:34:17 crc kubenswrapper[4903]: E0128 17:34:17.934927 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4658f79-2284-4761-b715-0e0af88f2439" containerName="mariadb-database-create" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.934957 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4658f79-2284-4761-b715-0e0af88f2439" containerName="mariadb-database-create" Jan 28 17:34:17 crc kubenswrapper[4903]: E0128 17:34:17.934971 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af3e35a-a105-4812-9f41-c49343319188" containerName="mariadb-account-create-update" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.934980 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af3e35a-a105-4812-9f41-c49343319188" containerName="mariadb-account-create-update" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.935231 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4658f79-2284-4761-b715-0e0af88f2439" containerName="mariadb-database-create" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.935269 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af3e35a-a105-4812-9f41-c49343319188" containerName="mariadb-account-create-update" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.936269 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.942157 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.942423 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-mt4tm" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.942609 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.942813 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 17:34:17 crc kubenswrapper[4903]: I0128 17:34:17.949592 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-chqvt"] Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.045808 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-combined-ca-bundle\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.046141 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-scripts\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.046328 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-config-data\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.046407 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gdvx\" (UniqueName: \"kubernetes.io/projected/1fb25902-814a-41c3-b37d-827e3f4e2e93-kube-api-access-7gdvx\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.149108 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-config-data\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.149415 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gdvx\" (UniqueName: \"kubernetes.io/projected/1fb25902-814a-41c3-b37d-827e3f4e2e93-kube-api-access-7gdvx\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.149989 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-combined-ca-bundle\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc 
kubenswrapper[4903]: I0128 17:34:18.150402 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-scripts\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.155050 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-scripts\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.155712 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-combined-ca-bundle\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.157105 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-config-data\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.173234 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gdvx\" (UniqueName: \"kubernetes.io/projected/1fb25902-814a-41c3-b37d-827e3f4e2e93-kube-api-access-7gdvx\") pod \"aodh-db-sync-chqvt\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.264484 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:18 crc kubenswrapper[4903]: I0128 17:34:18.737448 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-chqvt"] Jan 28 17:34:18 crc kubenswrapper[4903]: W0128 17:34:18.738824 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fb25902_814a_41c3_b37d_827e3f4e2e93.slice/crio-d1ed163452e1f156647b306c0fa7373aa1627aaffbf76dfeb8950a9cea2c99eb WatchSource:0}: Error finding container d1ed163452e1f156647b306c0fa7373aa1627aaffbf76dfeb8950a9cea2c99eb: Status 404 returned error can't find the container with id d1ed163452e1f156647b306c0fa7373aa1627aaffbf76dfeb8950a9cea2c99eb Jan 28 17:34:19 crc kubenswrapper[4903]: I0128 17:34:19.435122 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-chqvt" event={"ID":"1fb25902-814a-41c3-b37d-827e3f4e2e93","Type":"ContainerStarted","Data":"d1ed163452e1f156647b306c0fa7373aa1627aaffbf76dfeb8950a9cea2c99eb"} Jan 28 17:34:19 crc kubenswrapper[4903]: I0128 17:34:19.441759 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb3c9671-9ec5-4516-ba80-85f085f39c57","Type":"ContainerStarted","Data":"c61c01f1fae338f7f723d0993b8ddde775c8bce1946e12c0913b71c8c14d1d1c"} Jan 28 17:34:24 crc kubenswrapper[4903]: I0128 17:34:24.067642 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-22tmw"] Jan 28 17:34:24 crc kubenswrapper[4903]: I0128 17:34:24.078240 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-22tmw"] Jan 28 17:34:24 crc kubenswrapper[4903]: I0128 17:34:24.429948 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d0f0f8b-1f17-443b-97b2-c32776d01176" path="/var/lib/kubelet/pods/8d0f0f8b-1f17-443b-97b2-c32776d01176/volumes" Jan 28 17:34:24 crc kubenswrapper[4903]: I0128 17:34:24.496688 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-chqvt" event={"ID":"1fb25902-814a-41c3-b37d-827e3f4e2e93","Type":"ContainerStarted","Data":"e66e2213492e8778a381661496e6aa4d3f2b04373813ae42b34899ae580175ee"} Jan 28 17:34:24 crc kubenswrapper[4903]: I0128 17:34:24.520542 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-chqvt" podStartSLOduration=2.8123793519999998 podStartE2EDuration="7.520508546s" podCreationTimestamp="2026-01-28 17:34:17 +0000 UTC" firstStartedPulling="2026-01-28 17:34:18.741265744 +0000 UTC m=+6531.017237255" lastFinishedPulling="2026-01-28 17:34:23.449394938 +0000 UTC m=+6535.725366449" observedRunningTime="2026-01-28 17:34:24.513067017 +0000 UTC m=+6536.789038528" watchObservedRunningTime="2026-01-28 17:34:24.520508546 +0000 UTC m=+6536.796480057" Jan 28 17:34:25 crc kubenswrapper[4903]: I0128 17:34:25.515166 4903 generic.go:334] "Generic (PLEG): container finished" podID="cb3c9671-9ec5-4516-ba80-85f085f39c57" containerID="c61c01f1fae338f7f723d0993b8ddde775c8bce1946e12c0913b71c8c14d1d1c" exitCode=0 Jan 28 17:34:25 crc kubenswrapper[4903]: I0128 17:34:25.515265 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb3c9671-9ec5-4516-ba80-85f085f39c57","Type":"ContainerDied","Data":"c61c01f1fae338f7f723d0993b8ddde775c8bce1946e12c0913b71c8c14d1d1c"} Jan 28 17:34:26 crc kubenswrapper[4903]: I0128 17:34:26.526406 4903 generic.go:334] "Generic 
(PLEG): container finished" podID="1fb25902-814a-41c3-b37d-827e3f4e2e93" containerID="e66e2213492e8778a381661496e6aa4d3f2b04373813ae42b34899ae580175ee" exitCode=0 Jan 28 17:34:26 crc kubenswrapper[4903]: I0128 17:34:26.526560 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-chqvt" event={"ID":"1fb25902-814a-41c3-b37d-827e3f4e2e93","Type":"ContainerDied","Data":"e66e2213492e8778a381661496e6aa4d3f2b04373813ae42b34899ae580175ee"} Jan 28 17:34:26 crc kubenswrapper[4903]: I0128 17:34:26.529808 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb3c9671-9ec5-4516-ba80-85f085f39c57","Type":"ContainerStarted","Data":"3568544d7440ed5fa9f471a9c93fda5f0f6059af55706530c7a946fbd57ec09c"} Jan 28 17:34:27 crc kubenswrapper[4903]: I0128 17:34:27.414366 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:34:27 crc kubenswrapper[4903]: E0128 17:34:27.414736 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.206182 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.320250 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-combined-ca-bundle\") pod \"1fb25902-814a-41c3-b37d-827e3f4e2e93\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.320310 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-scripts\") pod \"1fb25902-814a-41c3-b37d-827e3f4e2e93\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.320358 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gdvx\" (UniqueName: \"kubernetes.io/projected/1fb25902-814a-41c3-b37d-827e3f4e2e93-kube-api-access-7gdvx\") pod \"1fb25902-814a-41c3-b37d-827e3f4e2e93\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.320382 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-config-data\") pod \"1fb25902-814a-41c3-b37d-827e3f4e2e93\" (UID: \"1fb25902-814a-41c3-b37d-827e3f4e2e93\") " Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.479598 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-scripts" (OuterVolumeSpecName: "scripts") pod "1fb25902-814a-41c3-b37d-827e3f4e2e93" (UID: "1fb25902-814a-41c3-b37d-827e3f4e2e93"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.480719 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb25902-814a-41c3-b37d-827e3f4e2e93-kube-api-access-7gdvx" (OuterVolumeSpecName: "kube-api-access-7gdvx") pod "1fb25902-814a-41c3-b37d-827e3f4e2e93" (UID: "1fb25902-814a-41c3-b37d-827e3f4e2e93"). InnerVolumeSpecName "kube-api-access-7gdvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.526209 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.526250 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gdvx\" (UniqueName: \"kubernetes.io/projected/1fb25902-814a-41c3-b37d-827e3f4e2e93-kube-api-access-7gdvx\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.549852 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-chqvt" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.579804 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fb25902-814a-41c3-b37d-827e3f4e2e93" (UID: "1fb25902-814a-41c3-b37d-827e3f4e2e93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.583736 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-chqvt" event={"ID":"1fb25902-814a-41c3-b37d-827e3f4e2e93","Type":"ContainerDied","Data":"d1ed163452e1f156647b306c0fa7373aa1627aaffbf76dfeb8950a9cea2c99eb"} Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.583774 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1ed163452e1f156647b306c0fa7373aa1627aaffbf76dfeb8950a9cea2c99eb" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.610273 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-config-data" (OuterVolumeSpecName: "config-data") pod "1fb25902-814a-41c3-b37d-827e3f4e2e93" (UID: "1fb25902-814a-41c3-b37d-827e3f4e2e93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.630706 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:28 crc kubenswrapper[4903]: I0128 17:34:28.630756 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb25902-814a-41c3-b37d-827e3f4e2e93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:29 crc kubenswrapper[4903]: I0128 17:34:29.564550 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb3c9671-9ec5-4516-ba80-85f085f39c57","Type":"ContainerStarted","Data":"32f12a498ed8d5ab056997d462706586d71c229ec7a0448fc1ef229a6ed237a7"} Jan 28 17:34:30 crc kubenswrapper[4903]: I0128 17:34:30.576109 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb3c9671-9ec5-4516-ba80-85f085f39c57","Type":"ContainerStarted","Data":"ce64eed746d06f60a463c8b7753016285e658009d8f3b56734194f4493b194e6"} Jan 28 17:34:30 crc kubenswrapper[4903]: I0128 17:34:30.603238 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.603219224 podStartE2EDuration="16.603219224s" podCreationTimestamp="2026-01-28 17:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:34:30.59933235 +0000 UTC m=+6542.875303871" watchObservedRunningTime="2026-01-28 17:34:30.603219224 +0000 UTC m=+6542.879190735" Jan 28 17:34:31 crc kubenswrapper[4903]: I0128 17:34:31.339888 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.583411 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 17:34:32 crc kubenswrapper[4903]: E0128 17:34:32.584033 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb25902-814a-41c3-b37d-827e3f4e2e93" containerName="aodh-db-sync" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.584052 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb25902-814a-41c3-b37d-827e3f4e2e93" containerName="aodh-db-sync" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.584311 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb25902-814a-41c3-b37d-827e3f4e2e93" containerName="aodh-db-sync" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.586680 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.590369 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.590621 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-mt4tm" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.592192 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.611416 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.732449 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7hq2\" (UniqueName: \"kubernetes.io/projected/b9ebfc22-103b-43ca-849b-583ab7800d10-kube-api-access-f7hq2\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.732515 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.732631 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-scripts\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.732751 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-config-data\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.834662 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7hq2\" (UniqueName: \"kubernetes.io/projected/b9ebfc22-103b-43ca-849b-583ab7800d10-kube-api-access-f7hq2\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.835020 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.835084 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-scripts\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.835153 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-config-data\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: 
I0128 17:34:32.842332 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-config-data\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.845089 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.853546 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7hq2\" (UniqueName: \"kubernetes.io/projected/b9ebfc22-103b-43ca-849b-583ab7800d10-kube-api-access-f7hq2\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.858772 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-scripts\") pod \"aodh-0\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " pod="openstack/aodh-0" Jan 28 17:34:32 crc kubenswrapper[4903]: I0128 17:34:32.937284 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 17:34:33 crc kubenswrapper[4903]: I0128 17:34:33.442283 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 17:34:33 crc kubenswrapper[4903]: I0128 17:34:33.614595 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerStarted","Data":"be003f4ccf2b1e2b620c1b17f3c33e7c7175972b478ef19ffb52196ee8e0447d"} Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.044786 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.116512 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.117139 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-central-agent" containerID="cri-o://c5ce3db2955609bba206be3d4472b6f62ffc527dd0668841a659fa7a723f23d9" gracePeriod=30 Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.118167 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="proxy-httpd" containerID="cri-o://d05d17385154a9a5e9213a1eb50e367e9596883138cc195e81fc010f6b1331a3" gracePeriod=30 Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.118232 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="sg-core" containerID="cri-o://a4463a829411f0bfb79101d27c0de91085475c8654cbc6cd53bba2edc1eb0e0b" gracePeriod=30 Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.118276 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-notification-agent" 
containerID="cri-o://5805cc99a9b8631a622cccbdddad9c20f241d9a0123967342a7d9a31080976d3" gracePeriod=30 Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.639866 4903 generic.go:334] "Generic (PLEG): container finished" podID="4a35b759-1510-4949-82eb-5a492d973fa7" containerID="d05d17385154a9a5e9213a1eb50e367e9596883138cc195e81fc010f6b1331a3" exitCode=0 Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.640238 4903 generic.go:334] "Generic (PLEG): container finished" podID="4a35b759-1510-4949-82eb-5a492d973fa7" containerID="a4463a829411f0bfb79101d27c0de91085475c8654cbc6cd53bba2edc1eb0e0b" exitCode=2 Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.640252 4903 generic.go:334] "Generic (PLEG): container finished" podID="4a35b759-1510-4949-82eb-5a492d973fa7" containerID="c5ce3db2955609bba206be3d4472b6f62ffc527dd0668841a659fa7a723f23d9" exitCode=0 Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.639925 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerDied","Data":"d05d17385154a9a5e9213a1eb50e367e9596883138cc195e81fc010f6b1331a3"} Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.640352 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerDied","Data":"a4463a829411f0bfb79101d27c0de91085475c8654cbc6cd53bba2edc1eb0e0b"} Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.640388 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerDied","Data":"c5ce3db2955609bba206be3d4472b6f62ffc527dd0668841a659fa7a723f23d9"} Jan 28 17:34:35 crc kubenswrapper[4903]: I0128 17:34:35.642258 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerStarted","Data":"89bcde0676bd5d435916e30ef4e4d0617e1ea7c561a87a9d9929b05483a2c003"} Jan 28 17:34:36 crc kubenswrapper[4903]: I0128 17:34:36.374628 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 17:34:36 crc kubenswrapper[4903]: I0128 17:34:36.675910 4903 generic.go:334] "Generic (PLEG): container finished" podID="4a35b759-1510-4949-82eb-5a492d973fa7" containerID="5805cc99a9b8631a622cccbdddad9c20f241d9a0123967342a7d9a31080976d3" exitCode=0 Jan 28 17:34:36 crc kubenswrapper[4903]: I0128 17:34:36.675962 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerDied","Data":"5805cc99a9b8631a622cccbdddad9c20f241d9a0123967342a7d9a31080976d3"} Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.360268 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.510654 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-config-data\") pod \"4a35b759-1510-4949-82eb-5a492d973fa7\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.512749 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-sg-core-conf-yaml\") pod \"4a35b759-1510-4949-82eb-5a492d973fa7\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.513073 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-scripts\") pod \"4a35b759-1510-4949-82eb-5a492d973fa7\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.513321 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-run-httpd\") pod \"4a35b759-1510-4949-82eb-5a492d973fa7\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.513931 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4a35b759-1510-4949-82eb-5a492d973fa7" (UID: "4a35b759-1510-4949-82eb-5a492d973fa7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.517415 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-log-httpd\") pod \"4a35b759-1510-4949-82eb-5a492d973fa7\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.517564 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th7gk\" (UniqueName: \"kubernetes.io/projected/4a35b759-1510-4949-82eb-5a492d973fa7-kube-api-access-th7gk\") pod \"4a35b759-1510-4949-82eb-5a492d973fa7\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.517695 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-combined-ca-bundle\") pod \"4a35b759-1510-4949-82eb-5a492d973fa7\" (UID: \"4a35b759-1510-4949-82eb-5a492d973fa7\") " Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.517930 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4a35b759-1510-4949-82eb-5a492d973fa7" (UID: "4a35b759-1510-4949-82eb-5a492d973fa7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.518711 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.518733 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a35b759-1510-4949-82eb-5a492d973fa7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.521826 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-scripts" (OuterVolumeSpecName: "scripts") pod "4a35b759-1510-4949-82eb-5a492d973fa7" (UID: "4a35b759-1510-4949-82eb-5a492d973fa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.521904 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a35b759-1510-4949-82eb-5a492d973fa7-kube-api-access-th7gk" (OuterVolumeSpecName: "kube-api-access-th7gk") pod "4a35b759-1510-4949-82eb-5a492d973fa7" (UID: "4a35b759-1510-4949-82eb-5a492d973fa7"). InnerVolumeSpecName "kube-api-access-th7gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.575720 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4a35b759-1510-4949-82eb-5a492d973fa7" (UID: "4a35b759-1510-4949-82eb-5a492d973fa7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.621084 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.621124 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.621138 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th7gk\" (UniqueName: \"kubernetes.io/projected/4a35b759-1510-4949-82eb-5a492d973fa7-kube-api-access-th7gk\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.631411 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a35b759-1510-4949-82eb-5a492d973fa7" (UID: "4a35b759-1510-4949-82eb-5a492d973fa7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.692422 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a35b759-1510-4949-82eb-5a492d973fa7","Type":"ContainerDied","Data":"0bdbe9ba84a46560e596c3bf83e8d33ba398250a7ff23e659ca44e0101e0bc55"} Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.692468 4903 scope.go:117] "RemoveContainer" containerID="d05d17385154a9a5e9213a1eb50e367e9596883138cc195e81fc010f6b1331a3" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.692626 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.698152 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-config-data" (OuterVolumeSpecName: "config-data") pod "4a35b759-1510-4949-82eb-5a492d973fa7" (UID: "4a35b759-1510-4949-82eb-5a492d973fa7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.722115 4903 scope.go:117] "RemoveContainer" containerID="a4463a829411f0bfb79101d27c0de91085475c8654cbc6cd53bba2edc1eb0e0b" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.723276 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.723317 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a35b759-1510-4949-82eb-5a492d973fa7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.753845 4903 scope.go:117] "RemoveContainer" containerID="5805cc99a9b8631a622cccbdddad9c20f241d9a0123967342a7d9a31080976d3" Jan 28 17:34:37 crc kubenswrapper[4903]: I0128 17:34:37.781267 4903 scope.go:117] "RemoveContainer" containerID="c5ce3db2955609bba206be3d4472b6f62ffc527dd0668841a659fa7a723f23d9" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.033275 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.045350 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.065987 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:38 crc kubenswrapper[4903]: E0128 17:34:38.066505 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-central-agent" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066545 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-central-agent" Jan 28 17:34:38 crc kubenswrapper[4903]: E0128 17:34:38.066560 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-notification-agent" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066569 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-notification-agent" Jan 28 17:34:38 crc kubenswrapper[4903]: E0128 17:34:38.066586 4903 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="proxy-httpd" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066595 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="proxy-httpd" Jan 28 17:34:38 crc kubenswrapper[4903]: E0128 17:34:38.066608 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="sg-core" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066615 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="sg-core" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066849 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-notification-agent" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066876 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="sg-core" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066914 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="ceilometer-central-agent" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.066934 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" containerName="proxy-httpd" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.069378 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.075554 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.076955 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.087700 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.235346 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-scripts\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.235402 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-config-data\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.235504 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.235581 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-run-httpd\") pod 
\"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.235601 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p57l\" (UniqueName: \"kubernetes.io/projected/b941052e-c9c0-4005-8a1e-a30d21da0dbc-kube-api-access-7p57l\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.235763 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.235824 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.337997 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-scripts\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.338056 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-config-data\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.338132 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.338213 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.338241 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p57l\" (UniqueName: \"kubernetes.io/projected/b941052e-c9c0-4005-8a1e-a30d21da0dbc-kube-api-access-7p57l\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.338308 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.338336 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.339189 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.339194 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.343311 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.343565 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-scripts\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.343775 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-config-data\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.347237 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.358483 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p57l\" (UniqueName: \"kubernetes.io/projected/b941052e-c9c0-4005-8a1e-a30d21da0dbc-kube-api-access-7p57l\") pod \"ceilometer-0\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.386876 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.432295 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a35b759-1510-4949-82eb-5a492d973fa7" path="/var/lib/kubelet/pods/4a35b759-1510-4949-82eb-5a492d973fa7/volumes" Jan 28 17:34:38 crc kubenswrapper[4903]: I0128 17:34:38.774808 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerStarted","Data":"ce138d823c04992b3a56c6ab756c7b78ab2d1575d445894a611de2a5a3d3753a"} Jan 28 17:34:39 crc kubenswrapper[4903]: I0128 17:34:39.013574 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:39 crc kubenswrapper[4903]: I0128 17:34:39.071629 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9wdl4"] Jan 28 17:34:39 crc kubenswrapper[4903]: I0128 17:34:39.093129 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9wdl4"] Jan 28 17:34:39 crc kubenswrapper[4903]: I0128 17:34:39.790240 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerStarted","Data":"2133d6271d65076726aac352bffd86c8fc900fcee3b67393bcf3dfcce4849ed7"} Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.057606 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.080602 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-n4xkj"] Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.098492 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-n4xkj"] Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.110338 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.110675 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc" containerName="kube-state-metrics" containerID="cri-o://0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae" gracePeriod=30 Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.433879 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393ab6f9-40fb-4c36-a6c9-a2bff0096e9a" path="/var/lib/kubelet/pods/393ab6f9-40fb-4c36-a6c9-a2bff0096e9a/volumes" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.435269 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ea63b1-7931-4420-89b7-a6577ca2076f" path="/var/lib/kubelet/pods/96ea63b1-7931-4420-89b7-a6577ca2076f/volumes" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.689705 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.797449 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xpkf\" (UniqueName: \"kubernetes.io/projected/7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc-kube-api-access-5xpkf\") pod \"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc\" (UID: \"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc\") " Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.804136 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc-kube-api-access-5xpkf" (OuterVolumeSpecName: "kube-api-access-5xpkf") pod "7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc" (UID: "7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc"). InnerVolumeSpecName "kube-api-access-5xpkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.820193 4903 generic.go:334] "Generic (PLEG): container finished" podID="7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc" containerID="0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae" exitCode=2 Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.820263 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc","Type":"ContainerDied","Data":"0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae"} Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.820290 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc","Type":"ContainerDied","Data":"444c3f55dfcf6947f577dcf5fb1bbc4b4477ce95ff61e3a4181212affae72d41"} Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.820307 4903 scope.go:117] "RemoveContainer" containerID="0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.820470 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.836563 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerStarted","Data":"2d79d899bc9ef3057ddb624bdf2d4bc4eb7867e6506fe6b9d2d72a665f658426"} Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.838657 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerStarted","Data":"d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83"} Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.867689 4903 scope.go:117] "RemoveContainer" containerID="0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae" Jan 28 17:34:40 crc kubenswrapper[4903]: E0128 17:34:40.870843 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae\": container with ID starting with 0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae not found: ID does not exist" containerID="0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.870901 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae"} err="failed to get container status \"0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae\": rpc error: code = NotFound desc = could not find container \"0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae\": container with ID starting with 0be1db9a85b7d1b6674bc5ac5265dde2aa7d12ba16c98862092599bf583d54ae not found: ID does not exist" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.878052 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.888920 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.900897 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xpkf\" (UniqueName: \"kubernetes.io/projected/7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc-kube-api-access-5xpkf\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.904814 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:34:40 crc kubenswrapper[4903]: E0128 17:34:40.905276 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc" containerName="kube-state-metrics" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.905296 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc" containerName="kube-state-metrics" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.905505 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc" containerName="kube-state-metrics" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.906274 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.912298 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.912405 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 17:34:40 crc kubenswrapper[4903]: I0128 17:34:40.917084 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.003331 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.003411 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.003585 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.004320 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9fn\" (UniqueName: \"kubernetes.io/projected/5f62a38f-f31a-498f-9183-7149bbadab84-kube-api-access-tr9fn\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.106240 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.106283 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.106341 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.106437 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9fn\" 
(UniqueName: \"kubernetes.io/projected/5f62a38f-f31a-498f-9183-7149bbadab84-kube-api-access-tr9fn\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.112438 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.114702 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.115830 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5f62a38f-f31a-498f-9183-7149bbadab84-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.126080 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9fn\" (UniqueName: \"kubernetes.io/projected/5f62a38f-f31a-498f-9183-7149bbadab84-kube-api-access-tr9fn\") pod \"kube-state-metrics-0\" (UID: \"5f62a38f-f31a-498f-9183-7149bbadab84\") " pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.224117 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.767447 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 17:34:41 crc kubenswrapper[4903]: W0128 17:34:41.772853 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f62a38f_f31a_498f_9183_7149bbadab84.slice/crio-c7e320c02f82f9c3950b9047ad6fecde0ca8eacae2a1de97cf32d763943c341e WatchSource:0}: Error finding container c7e320c02f82f9c3950b9047ad6fecde0ca8eacae2a1de97cf32d763943c341e: Status 404 returned error can't find the container with id c7e320c02f82f9c3950b9047ad6fecde0ca8eacae2a1de97cf32d763943c341e Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.862803 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerStarted","Data":"95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af"} Jan 28 17:34:41 crc kubenswrapper[4903]: I0128 17:34:41.865268 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5f62a38f-f31a-498f-9183-7149bbadab84","Type":"ContainerStarted","Data":"c7e320c02f82f9c3950b9047ad6fecde0ca8eacae2a1de97cf32d763943c341e"} Jan 28 17:34:42 crc kubenswrapper[4903]: I0128 17:34:42.415365 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:34:42 crc kubenswrapper[4903]: E0128 17:34:42.416209 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:34:42 crc kubenswrapper[4903]: I0128 17:34:42.435036 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc" path="/var/lib/kubelet/pods/7db0ca32-bf70-434e-bcf2-c5ffb8ca14bc/volumes" Jan 28 17:34:42 crc kubenswrapper[4903]: I0128 17:34:42.879044 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerStarted","Data":"37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d"} Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.889986 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerStarted","Data":"53de8df146305aa905cd51af1ecb7da51ae25a9b6ea6a7a9dcc5f745407960c0"} Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.890152 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-notifier" containerID="cri-o://2d79d899bc9ef3057ddb624bdf2d4bc4eb7867e6506fe6b9d2d72a665f658426" gracePeriod=30 Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.890154 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-api" containerID="cri-o://89bcde0676bd5d435916e30ef4e4d0617e1ea7c561a87a9d9929b05483a2c003" gracePeriod=30 
Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.890249 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-listener" containerID="cri-o://53de8df146305aa905cd51af1ecb7da51ae25a9b6ea6a7a9dcc5f745407960c0" gracePeriod=30 Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.890253 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-evaluator" containerID="cri-o://ce138d823c04992b3a56c6ab756c7b78ab2d1575d445894a611de2a5a3d3753a" gracePeriod=30 Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.892346 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5f62a38f-f31a-498f-9183-7149bbadab84","Type":"ContainerStarted","Data":"363ce218c250d159e710ead93400cc534e9847b70213c4b936fd998d38464de3"} Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.893556 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.912368 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.045870715 podStartE2EDuration="11.912348695s" podCreationTimestamp="2026-01-28 17:34:32 +0000 UTC" firstStartedPulling="2026-01-28 17:34:33.436344422 +0000 UTC m=+6545.712315933" lastFinishedPulling="2026-01-28 17:34:42.302822402 +0000 UTC m=+6554.578793913" observedRunningTime="2026-01-28 17:34:43.910423653 +0000 UTC m=+6556.186395184" watchObservedRunningTime="2026-01-28 17:34:43.912348695 +0000 UTC m=+6556.188320206" Jan 28 17:34:43 crc kubenswrapper[4903]: I0128 17:34:43.951664 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.5505025999999997 podStartE2EDuration="3.951647123s" podCreationTimestamp="2026-01-28 17:34:40 +0000 UTC" firstStartedPulling="2026-01-28 17:34:41.775275867 +0000 UTC m=+6554.051247378" lastFinishedPulling="2026-01-28 17:34:42.17642039 +0000 UTC m=+6554.452391901" observedRunningTime="2026-01-28 17:34:43.94444686 +0000 UTC m=+6556.220418381" watchObservedRunningTime="2026-01-28 17:34:43.951647123 +0000 UTC m=+6556.227618634" Jan 28 17:34:44 crc kubenswrapper[4903]: I0128 17:34:44.784738 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:44 crc kubenswrapper[4903]: I0128 17:34:44.793943 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:44 crc kubenswrapper[4903]: I0128 17:34:44.908002 4903 generic.go:334] "Generic (PLEG): container finished" podID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerID="ce138d823c04992b3a56c6ab756c7b78ab2d1575d445894a611de2a5a3d3753a" exitCode=0 Jan 28 17:34:44 crc kubenswrapper[4903]: I0128 17:34:44.908037 4903 generic.go:334] "Generic (PLEG): container finished" podID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerID="89bcde0676bd5d435916e30ef4e4d0617e1ea7c561a87a9d9929b05483a2c003" exitCode=0 Jan 28 17:34:44 crc kubenswrapper[4903]: I0128 17:34:44.908219 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerDied","Data":"ce138d823c04992b3a56c6ab756c7b78ab2d1575d445894a611de2a5a3d3753a"} Jan 
28 17:34:44 crc kubenswrapper[4903]: I0128 17:34:44.908336 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerDied","Data":"89bcde0676bd5d435916e30ef4e4d0617e1ea7c561a87a9d9929b05483a2c003"} Jan 28 17:34:44 crc kubenswrapper[4903]: I0128 17:34:44.913164 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 17:34:46 crc kubenswrapper[4903]: I0128 17:34:46.929392 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerStarted","Data":"7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7"} Jan 28 17:34:46 crc kubenswrapper[4903]: I0128 17:34:46.929988 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 17:34:46 crc kubenswrapper[4903]: I0128 17:34:46.929767 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="sg-core" containerID="cri-o://37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d" gracePeriod=30 Jan 28 17:34:46 crc kubenswrapper[4903]: I0128 17:34:46.929591 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-central-agent" containerID="cri-o://d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83" gracePeriod=30 Jan 28 17:34:46 crc kubenswrapper[4903]: I0128 17:34:46.929851 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-notification-agent" containerID="cri-o://95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af" gracePeriod=30 Jan 28 17:34:46 crc kubenswrapper[4903]: I0128 17:34:46.929842 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="proxy-httpd" containerID="cri-o://7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7" gracePeriod=30 Jan 28 17:34:47 crc kubenswrapper[4903]: I0128 17:34:47.941619 4903 generic.go:334] "Generic (PLEG): container finished" podID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerID="7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7" exitCode=0 Jan 28 17:34:47 crc kubenswrapper[4903]: I0128 17:34:47.941998 4903 generic.go:334] "Generic (PLEG): container finished" podID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerID="37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d" exitCode=2 Jan 28 17:34:47 crc kubenswrapper[4903]: I0128 17:34:47.942012 4903 generic.go:334] "Generic (PLEG): container finished" podID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerID="95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af" exitCode=0 Jan 28 17:34:47 crc kubenswrapper[4903]: I0128 17:34:47.941705 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerDied","Data":"7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7"} Jan 28 17:34:47 crc kubenswrapper[4903]: I0128 17:34:47.942059 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerDied","Data":"37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d"} Jan 28 17:34:47 crc kubenswrapper[4903]: I0128 17:34:47.942079 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerDied","Data":"95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af"} Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.580000 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.699879 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-sg-core-conf-yaml\") pod \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.699983 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-scripts\") pod \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.700070 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-config-data\") pod \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.700164 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-log-httpd\") pod \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.700214 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p57l\" (UniqueName: \"kubernetes.io/projected/b941052e-c9c0-4005-8a1e-a30d21da0dbc-kube-api-access-7p57l\") pod \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.700252 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-combined-ca-bundle\") pod \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.700301 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-run-httpd\") pod \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\" (UID: \"b941052e-c9c0-4005-8a1e-a30d21da0dbc\") " Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.700873 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b941052e-c9c0-4005-8a1e-a30d21da0dbc" (UID: "b941052e-c9c0-4005-8a1e-a30d21da0dbc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.700989 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b941052e-c9c0-4005-8a1e-a30d21da0dbc" (UID: "b941052e-c9c0-4005-8a1e-a30d21da0dbc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.719208 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b941052e-c9c0-4005-8a1e-a30d21da0dbc-kube-api-access-7p57l" (OuterVolumeSpecName: "kube-api-access-7p57l") pod "b941052e-c9c0-4005-8a1e-a30d21da0dbc" (UID: "b941052e-c9c0-4005-8a1e-a30d21da0dbc"). InnerVolumeSpecName "kube-api-access-7p57l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.726154 4903 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.726217 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p57l\" (UniqueName: \"kubernetes.io/projected/b941052e-c9c0-4005-8a1e-a30d21da0dbc-kube-api-access-7p57l\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.726238 4903 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b941052e-c9c0-4005-8a1e-a30d21da0dbc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.728090 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-scripts" (OuterVolumeSpecName: "scripts") pod "b941052e-c9c0-4005-8a1e-a30d21da0dbc" (UID: "b941052e-c9c0-4005-8a1e-a30d21da0dbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.759703 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b941052e-c9c0-4005-8a1e-a30d21da0dbc" (UID: "b941052e-c9c0-4005-8a1e-a30d21da0dbc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.828300 4903 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.828619 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.836503 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b941052e-c9c0-4005-8a1e-a30d21da0dbc" (UID: "b941052e-c9c0-4005-8a1e-a30d21da0dbc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.879253 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-config-data" (OuterVolumeSpecName: "config-data") pod "b941052e-c9c0-4005-8a1e-a30d21da0dbc" (UID: "b941052e-c9c0-4005-8a1e-a30d21da0dbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.931958 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.932005 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b941052e-c9c0-4005-8a1e-a30d21da0dbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.961817 4903 generic.go:334] "Generic (PLEG): container finished" podID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerID="d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83" exitCode=0 Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.961925 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.961953 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerDied","Data":"d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83"} Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.962625 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b941052e-c9c0-4005-8a1e-a30d21da0dbc","Type":"ContainerDied","Data":"2133d6271d65076726aac352bffd86c8fc900fcee3b67393bcf3dfcce4849ed7"} Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.962658 4903 scope.go:117] "RemoveContainer" containerID="7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7" Jan 28 17:34:49 crc kubenswrapper[4903]: I0128 17:34:49.990036 4903 scope.go:117] "RemoveContainer" containerID="37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.013770 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.019216 4903 scope.go:117] "RemoveContainer" containerID="95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.039373 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.050867 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:50 crc kubenswrapper[4903]: E0128 17:34:50.054745 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="sg-core" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.054792 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="sg-core" Jan 28 17:34:50 crc kubenswrapper[4903]: E0128 17:34:50.054869 4903 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-central-agent" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.054880 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-central-agent" Jan 28 17:34:50 crc kubenswrapper[4903]: E0128 17:34:50.054916 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-notification-agent" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.054924 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-notification-agent" Jan 28 17:34:50 crc kubenswrapper[4903]: E0128 17:34:50.054934 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="proxy-httpd" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.054968 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="proxy-httpd" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.055316 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="proxy-httpd" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.055372 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-central-agent" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.055393 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="sg-core" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.055418 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" containerName="ceilometer-notification-agent" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.058091 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.060267 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.061305 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.061484 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.061494 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.065766 4903 scope.go:117] "RemoveContainer" containerID="d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.099520 4903 scope.go:117] "RemoveContainer" containerID="7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7" Jan 28 17:34:50 crc kubenswrapper[4903]: E0128 17:34:50.100473 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7\": container with ID starting with 7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7 not found: ID does not exist" containerID="7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.100510 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7"} err="failed to get container status \"7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7\": rpc error: code = NotFound desc = could not find container \"7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7\": container with ID starting with 7b32aa1f219a82ae6bd9fd7bc5288118308b99f5e156c6f8c0cb13967e5e7ba7 not found: ID does not exist" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.100546 4903 scope.go:117] "RemoveContainer" containerID="37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d" Jan 28 17:34:50 crc kubenswrapper[4903]: E0128 17:34:50.101903 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d\": container with ID starting with 37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d not found: ID does not exist" containerID="37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.101929 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d"} err="failed to get container status \"37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d\": rpc error: code = NotFound desc = could not find container \"37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d\": container with ID starting with 37445f44e807bdfe4683fc65db624d1d7714b0d5420ca726b9f1735e5a72f45d not found: ID does not exist" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.101946 4903 scope.go:117] "RemoveContainer" containerID="95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af" Jan 28 17:34:50 
crc kubenswrapper[4903]: E0128 17:34:50.102518 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af\": container with ID starting with 95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af not found: ID does not exist" containerID="95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.102604 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af"} err="failed to get container status \"95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af\": rpc error: code = NotFound desc = could not find container \"95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af\": container with ID starting with 95d45b5e9cd41e5f8ff5d09561e3b72b18043a65452faba9eea51c3c8878f4af not found: ID does not exist" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.102618 4903 scope.go:117] "RemoveContainer" containerID="d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83" Jan 28 17:34:50 crc kubenswrapper[4903]: E0128 17:34:50.102949 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83\": container with ID starting with d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83 not found: ID does not exist" containerID="d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.102974 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83"} err="failed to get container status \"d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83\": rpc error: code = NotFound desc = could not find container \"d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83\": container with ID starting with d833f7a4fc07f596a4fb154390994b57cf21b736ea7ca5cd110cf915d81fda83 not found: ID does not exist" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.136141 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b88bde9-c7d6-4701-99b6-a319420105c7-log-httpd\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.136204 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.136241 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.136474 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-scripts\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.136670 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtpjg\" (UniqueName: \"kubernetes.io/projected/9b88bde9-c7d6-4701-99b6-a319420105c7-kube-api-access-dtpjg\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.136877 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-config-data\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.137092 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.137155 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b88bde9-c7d6-4701-99b6-a319420105c7-run-httpd\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.245611 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-scripts\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.245714 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtpjg\" (UniqueName: \"kubernetes.io/projected/9b88bde9-c7d6-4701-99b6-a319420105c7-kube-api-access-dtpjg\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.245849 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-config-data\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.246034 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.246103 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b88bde9-c7d6-4701-99b6-a319420105c7-run-httpd\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 
17:34:50.246145 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.246167 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b88bde9-c7d6-4701-99b6-a319420105c7-log-httpd\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.246207 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.247293 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b88bde9-c7d6-4701-99b6-a319420105c7-run-httpd\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.247962 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b88bde9-c7d6-4701-99b6-a319420105c7-log-httpd\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.250136 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.250879 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-config-data\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.251627 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.251861 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-scripts\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.259969 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b88bde9-c7d6-4701-99b6-a319420105c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.266635 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dtpjg\" (UniqueName: \"kubernetes.io/projected/9b88bde9-c7d6-4701-99b6-a319420105c7-kube-api-access-dtpjg\") pod \"ceilometer-0\" (UID: \"9b88bde9-c7d6-4701-99b6-a319420105c7\") " pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.384139 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.430440 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b941052e-c9c0-4005-8a1e-a30d21da0dbc" path="/var/lib/kubelet/pods/b941052e-c9c0-4005-8a1e-a30d21da0dbc/volumes" Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.888879 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 17:34:50 crc kubenswrapper[4903]: I0128 17:34:50.976329 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b88bde9-c7d6-4701-99b6-a319420105c7","Type":"ContainerStarted","Data":"884d7ced2de45b30479dc81a8b911c31c05c34e1555ec84554244495259f528e"} Jan 28 17:34:51 crc kubenswrapper[4903]: I0128 17:34:51.234602 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 17:34:51 crc kubenswrapper[4903]: I0128 17:34:51.986736 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b88bde9-c7d6-4701-99b6-a319420105c7","Type":"ContainerStarted","Data":"2dd28698257342735a5d72858df620a07714c340265408366c59ae433bc75f7d"} Jan 28 17:34:52 crc kubenswrapper[4903]: I0128 17:34:52.999009 4903 generic.go:334] "Generic (PLEG): container finished" podID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerID="2d79d899bc9ef3057ddb624bdf2d4bc4eb7867e6506fe6b9d2d72a665f658426" exitCode=0 Jan 28 17:34:53 crc kubenswrapper[4903]: I0128 17:34:52.999082 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerDied","Data":"2d79d899bc9ef3057ddb624bdf2d4bc4eb7867e6506fe6b9d2d72a665f658426"} Jan 28 17:34:53 crc kubenswrapper[4903]: I0128 17:34:53.001601 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b88bde9-c7d6-4701-99b6-a319420105c7","Type":"ContainerStarted","Data":"587b98b1a63f16356170167c2224885760fe0a202ea868eb8bcce274ba8c5872"} Jan 28 17:34:54 crc kubenswrapper[4903]: I0128 17:34:54.018573 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b88bde9-c7d6-4701-99b6-a319420105c7","Type":"ContainerStarted","Data":"60384af50193a12886ade8136d3cce0ceeadbad0cdb9aa21ea0202b6c75395cd"} Jan 28 17:34:56 crc kubenswrapper[4903]: I0128 17:34:56.413090 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:34:56 crc kubenswrapper[4903]: E0128 17:34:56.414001 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:34:57 crc kubenswrapper[4903]: I0128 17:34:57.059831 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9b88bde9-c7d6-4701-99b6-a319420105c7","Type":"ContainerStarted","Data":"c693b9221ff2813cbffe8097816ad787c3c13885d92c4c7f8856978a30d591a3"} Jan 28 17:34:57 crc kubenswrapper[4903]: I0128 17:34:57.060884 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 17:34:57 crc kubenswrapper[4903]: I0128 17:34:57.101936 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.067502826 podStartE2EDuration="7.101913444s" podCreationTimestamp="2026-01-28 17:34:50 +0000 UTC" firstStartedPulling="2026-01-28 17:34:50.89340118 +0000 UTC m=+6563.169372691" lastFinishedPulling="2026-01-28 17:34:55.927811788 +0000 UTC m=+6568.203783309" observedRunningTime="2026-01-28 17:34:57.085704321 +0000 UTC m=+6569.361675842" watchObservedRunningTime="2026-01-28 17:34:57.101913444 +0000 UTC m=+6569.377884965" Jan 28 17:34:57 crc kubenswrapper[4903]: I0128 17:34:57.965712 4903 scope.go:117] "RemoveContainer" containerID="f5fca149b88cad5efb4fb921475d216877821372f8b8d6d8b242dda7665676dd" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.021274 4903 scope.go:117] "RemoveContainer" containerID="f3d88e81a1c88d8dfc98ce2b982579535fbb70753657bb71991f7570229545d3" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.053643 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-gg9sn"] Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.067673 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-gg9sn"] Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.091313 4903 scope.go:117] "RemoveContainer" containerID="2c3e434e6d47b47048281d066137238b3a673f0468580e514847931a66c0a462" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.153180 4903 scope.go:117] "RemoveContainer" containerID="ebc0ca2d53c97ece2aea016eb29096d2faf029a539004227c48bae623ffb0725" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.252901 4903 scope.go:117] "RemoveContainer" containerID="471e950b1c26f4c4c2ab6ec600d3214871c349092283656ef6d270179925205e" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.301550 4903 scope.go:117] "RemoveContainer" containerID="85a23c76bb3a1355f28a4831ce2ad54a729e7770b12866c43ba03ae93f690f9d" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.363715 4903 scope.go:117] "RemoveContainer" containerID="667caeb9cc9d975db0a662fabb5aa85b793c84a917c028cd94b04ab9b63a8b28" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.389196 4903 scope.go:117] "RemoveContainer" containerID="b1e4c7a0cd5eb3039528dc7bbab148af038399c68a6a7d965f317b1a9e4e7a9b" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.427092 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48dcc322-2413-4bfb-a717-25c8fcb8bebb" path="/var/lib/kubelet/pods/48dcc322-2413-4bfb-a717-25c8fcb8bebb/volumes" Jan 28 17:34:58 crc kubenswrapper[4903]: I0128 17:34:58.444124 4903 scope.go:117] "RemoveContainer" containerID="2e28ab3bff497c7adc168195dbe8594d9a9eb099bddfe5c07fd213901abee703" Jan 28 17:35:11 crc kubenswrapper[4903]: I0128 17:35:11.413779 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:35:11 crc kubenswrapper[4903]: E0128 17:35:11.414597 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.306587 4903 generic.go:334] "Generic (PLEG): container finished" podID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerID="53de8df146305aa905cd51af1ecb7da51ae25a9b6ea6a7a9dcc5f745407960c0" exitCode=137 Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.306628 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerDied","Data":"53de8df146305aa905cd51af1ecb7da51ae25a9b6ea6a7a9dcc5f745407960c0"} Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.544270 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.587660 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-scripts\") pod \"b9ebfc22-103b-43ca-849b-583ab7800d10\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.587794 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-combined-ca-bundle\") pod \"b9ebfc22-103b-43ca-849b-583ab7800d10\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.587936 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-config-data\") pod \"b9ebfc22-103b-43ca-849b-583ab7800d10\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.588072 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7hq2\" (UniqueName: \"kubernetes.io/projected/b9ebfc22-103b-43ca-849b-583ab7800d10-kube-api-access-f7hq2\") pod \"b9ebfc22-103b-43ca-849b-583ab7800d10\" (UID: \"b9ebfc22-103b-43ca-849b-583ab7800d10\") " Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.597124 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-scripts" (OuterVolumeSpecName: "scripts") pod "b9ebfc22-103b-43ca-849b-583ab7800d10" (UID: "b9ebfc22-103b-43ca-849b-583ab7800d10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.599970 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ebfc22-103b-43ca-849b-583ab7800d10-kube-api-access-f7hq2" (OuterVolumeSpecName: "kube-api-access-f7hq2") pod "b9ebfc22-103b-43ca-849b-583ab7800d10" (UID: "b9ebfc22-103b-43ca-849b-583ab7800d10"). InnerVolumeSpecName "kube-api-access-f7hq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.691191 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7hq2\" (UniqueName: \"kubernetes.io/projected/b9ebfc22-103b-43ca-849b-583ab7800d10-kube-api-access-f7hq2\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.691232 4903 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.734778 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-config-data" (OuterVolumeSpecName: "config-data") pod "b9ebfc22-103b-43ca-849b-583ab7800d10" (UID: "b9ebfc22-103b-43ca-849b-583ab7800d10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.768967 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9ebfc22-103b-43ca-849b-583ab7800d10" (UID: "b9ebfc22-103b-43ca-849b-583ab7800d10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.792858 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:14 crc kubenswrapper[4903]: I0128 17:35:14.792909 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ebfc22-103b-43ca-849b-583ab7800d10-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.345454 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b9ebfc22-103b-43ca-849b-583ab7800d10","Type":"ContainerDied","Data":"be003f4ccf2b1e2b620c1b17f3c33e7c7175972b478ef19ffb52196ee8e0447d"} Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.346100 4903 scope.go:117] "RemoveContainer" containerID="53de8df146305aa905cd51af1ecb7da51ae25a9b6ea6a7a9dcc5f745407960c0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.345631 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.398663 4903 scope.go:117] "RemoveContainer" containerID="2d79d899bc9ef3057ddb624bdf2d4bc4eb7867e6506fe6b9d2d72a665f658426" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.403875 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.428798 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.442594 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 17:35:15 crc kubenswrapper[4903]: E0128 17:35:15.443444 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-evaluator" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.443559 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-evaluator" Jan 28 17:35:15 crc kubenswrapper[4903]: E0128 17:35:15.443661 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-api" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.443737 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-api" Jan 28 17:35:15 crc kubenswrapper[4903]: E0128 17:35:15.443828 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-notifier" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.444092 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-notifier" Jan 28 17:35:15 crc kubenswrapper[4903]: E0128 17:35:15.444184 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-listener" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.444255 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-listener" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.444675 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-listener" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.444932 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-notifier" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.445029 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-api" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.445183 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" containerName="aodh-evaluator" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.445308 4903 scope.go:117] "RemoveContainer" containerID="ce138d823c04992b3a56c6ab756c7b78ab2d1575d445894a611de2a5a3d3753a" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.448021 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.453031 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.454166 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-mt4tm" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.454191 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.457366 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.458922 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.461242 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.508388 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9j2\" (UniqueName: \"kubernetes.io/projected/05f47480-f186-4d10-9260-084ec8f72134-kube-api-access-ds9j2\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.508455 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-internal-tls-certs\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.508488 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-scripts\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.508511 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.508562 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-config-data\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.508620 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-public-tls-certs\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.567802 4903 scope.go:117] "RemoveContainer" containerID="89bcde0676bd5d435916e30ef4e4d0617e1ea7c561a87a9d9929b05483a2c003" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.612946 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds9j2\" 
(UniqueName: \"kubernetes.io/projected/05f47480-f186-4d10-9260-084ec8f72134-kube-api-access-ds9j2\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.613018 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-internal-tls-certs\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.613043 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-scripts\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.613061 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.613087 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-config-data\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.613110 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-public-tls-certs\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.622213 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-internal-tls-certs\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.622291 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-combined-ca-bundle\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.622429 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-public-tls-certs\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.629318 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-config-data\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.632025 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds9j2\" (UniqueName: \"kubernetes.io/projected/05f47480-f186-4d10-9260-084ec8f72134-kube-api-access-ds9j2\") pod \"aodh-0\" (UID: 
\"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.633963 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05f47480-f186-4d10-9260-084ec8f72134-scripts\") pod \"aodh-0\" (UID: \"05f47480-f186-4d10-9260-084ec8f72134\") " pod="openstack/aodh-0" Jan 28 17:35:15 crc kubenswrapper[4903]: I0128 17:35:15.806786 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 17:35:16 crc kubenswrapper[4903]: I0128 17:35:16.427130 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ebfc22-103b-43ca-849b-583ab7800d10" path="/var/lib/kubelet/pods/b9ebfc22-103b-43ca-849b-583ab7800d10/volumes" Jan 28 17:35:16 crc kubenswrapper[4903]: I0128 17:35:16.572385 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 17:35:16 crc kubenswrapper[4903]: W0128 17:35:16.579403 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05f47480_f186_4d10_9260_084ec8f72134.slice/crio-f4e4bbcdad93372f774ca0f3a0abc8c53477d929fcd5f2e5005217b1aa4e3f3a WatchSource:0}: Error finding container f4e4bbcdad93372f774ca0f3a0abc8c53477d929fcd5f2e5005217b1aa4e3f3a: Status 404 returned error can't find the container with id f4e4bbcdad93372f774ca0f3a0abc8c53477d929fcd5f2e5005217b1aa4e3f3a Jan 28 17:35:17 crc kubenswrapper[4903]: I0128 17:35:17.368622 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05f47480-f186-4d10-9260-084ec8f72134","Type":"ContainerStarted","Data":"f4e4bbcdad93372f774ca0f3a0abc8c53477d929fcd5f2e5005217b1aa4e3f3a"} Jan 28 17:35:18 crc kubenswrapper[4903]: I0128 17:35:18.386065 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05f47480-f186-4d10-9260-084ec8f72134","Type":"ContainerStarted","Data":"9b13bd2626fc578c6873d47cc278b0f4fe2e209295974cbd61f54e7d4d21c4a0"} Jan 28 17:35:19 crc kubenswrapper[4903]: I0128 17:35:19.420338 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05f47480-f186-4d10-9260-084ec8f72134","Type":"ContainerStarted","Data":"67f24dab5a4afaff9c6ad847c1891b2ae61358f74524a822e0c6fadc149a9de3"} Jan 28 17:35:20 crc kubenswrapper[4903]: I0128 17:35:20.410347 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 17:35:20 crc kubenswrapper[4903]: I0128 17:35:20.443700 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05f47480-f186-4d10-9260-084ec8f72134","Type":"ContainerStarted","Data":"dede2565686f3eb01d5fda50998015f478ed6bc4095334ea771870b7b36e8b96"} Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.484210 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"05f47480-f186-4d10-9260-084ec8f72134","Type":"ContainerStarted","Data":"831ec06ab8eafbbe28cb0b23bba0ef5e67c25e73176ef411ad2a9055c446ab80"} Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.510852 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.074663022 podStartE2EDuration="8.51083524s" podCreationTimestamp="2026-01-28 17:35:15 +0000 UTC" firstStartedPulling="2026-01-28 17:35:16.589553349 +0000 UTC m=+6588.865524860" lastFinishedPulling="2026-01-28 17:35:23.025725567 +0000 UTC m=+6595.301697078" 
observedRunningTime="2026-01-28 17:35:23.509146115 +0000 UTC m=+6595.785117646" watchObservedRunningTime="2026-01-28 17:35:23.51083524 +0000 UTC m=+6595.786806751" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.731259 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4dc5bc4f-hqmmp"] Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.733517 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.744224 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.744280 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4dc5bc4f-hqmmp"] Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.831222 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-openstack-cell1\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.831291 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflfn\" (UniqueName: \"kubernetes.io/projected/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-kube-api-access-nflfn\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.831347 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-config\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.831461 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-sb\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.831502 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-dns-svc\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.831546 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-nb\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.933490 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-openstack-cell1\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" 
(UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.933608 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nflfn\" (UniqueName: \"kubernetes.io/projected/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-kube-api-access-nflfn\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.933636 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-config\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.933699 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-sb\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.933726 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-dns-svc\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.933740 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-nb\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.934800 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-nb\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.934797 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-dns-svc\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.934877 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-sb\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.935397 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-openstack-cell1\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc 
kubenswrapper[4903]: I0128 17:35:23.935733 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-config\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:23 crc kubenswrapper[4903]: I0128 17:35:23.953489 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nflfn\" (UniqueName: \"kubernetes.io/projected/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-kube-api-access-nflfn\") pod \"dnsmasq-dns-f4dc5bc4f-hqmmp\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:24 crc kubenswrapper[4903]: I0128 17:35:24.052370 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:24 crc kubenswrapper[4903]: I0128 17:35:24.895540 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4dc5bc4f-hqmmp"] Jan 28 17:35:25 crc kubenswrapper[4903]: I0128 17:35:25.413478 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:35:25 crc kubenswrapper[4903]: E0128 17:35:25.414068 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:35:25 crc kubenswrapper[4903]: I0128 17:35:25.553780 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" event={"ID":"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5","Type":"ContainerStarted","Data":"86ca1c98b7e11ddc525bf3205063a436a24cce129495c4930626b60f929b3cf0"} Jan 28 17:35:25 crc kubenswrapper[4903]: I0128 17:35:25.553835 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" event={"ID":"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5","Type":"ContainerStarted","Data":"70e1989645356db3c9673c506b6248b3f0b2c84f00a457f0976c6c1fa2692f78"} Jan 28 17:35:26 crc kubenswrapper[4903]: I0128 17:35:26.564071 4903 generic.go:334] "Generic (PLEG): container finished" podID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerID="86ca1c98b7e11ddc525bf3205063a436a24cce129495c4930626b60f929b3cf0" exitCode=0 Jan 28 17:35:26 crc kubenswrapper[4903]: I0128 17:35:26.564112 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" event={"ID":"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5","Type":"ContainerDied","Data":"86ca1c98b7e11ddc525bf3205063a436a24cce129495c4930626b60f929b3cf0"} Jan 28 17:35:27 crc kubenswrapper[4903]: I0128 17:35:27.582736 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" event={"ID":"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5","Type":"ContainerStarted","Data":"47935c8b2af3b6897945f165f9735727c23474a2b5b40918b59b58271c50b5ec"} Jan 28 17:35:27 crc kubenswrapper[4903]: I0128 17:35:27.583288 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:27 crc kubenswrapper[4903]: I0128 17:35:27.615146 4903 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" podStartSLOduration=4.615128093 podStartE2EDuration="4.615128093s" podCreationTimestamp="2026-01-28 17:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:35:27.607283673 +0000 UTC m=+6599.883255184" watchObservedRunningTime="2026-01-28 17:35:27.615128093 +0000 UTC m=+6599.891099604" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.053724 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.120480 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d795b979-fg72n"] Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.120708 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54d795b979-fg72n" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerName="dnsmasq-dns" containerID="cri-o://d3121b1cc57323ec427df8a1b59c0e9ad4871a2f2dbde128889149873684441c" gracePeriod=10 Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.287665 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t"] Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.290003 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.307584 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t"] Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.412425 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xznr\" (UniqueName: \"kubernetes.io/projected/48c9360e-b3ed-41f7-a547-8abce5e78e96-kube-api-access-2xznr\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.412882 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-openstack-cell1\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.412939 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-dns-svc\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.422811 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.422896 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.422915 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-config\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.525147 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.525249 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.525269 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-config\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.525314 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xznr\" (UniqueName: \"kubernetes.io/projected/48c9360e-b3ed-41f7-a547-8abce5e78e96-kube-api-access-2xznr\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.525343 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-openstack-cell1\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.525379 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-dns-svc\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.526810 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.527436 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-dns-svc\") pod 
\"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.527574 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.527999 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-openstack-cell1\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.528664 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c9360e-b3ed-41f7-a547-8abce5e78e96-config\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.547318 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xznr\" (UniqueName: \"kubernetes.io/projected/48c9360e-b3ed-41f7-a547-8abce5e78e96-kube-api-access-2xznr\") pod \"dnsmasq-dns-6d4cbb4ddf-pkv2t\" (UID: \"48c9360e-b3ed-41f7-a547-8abce5e78e96\") " pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.673169 4903 generic.go:334] "Generic (PLEG): container finished" podID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerID="d3121b1cc57323ec427df8a1b59c0e9ad4871a2f2dbde128889149873684441c" exitCode=0 Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.673235 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d795b979-fg72n" event={"ID":"68b6b21a-e766-4b5f-944f-ced63870b9c0","Type":"ContainerDied","Data":"d3121b1cc57323ec427df8a1b59c0e9ad4871a2f2dbde128889149873684441c"} Jan 28 17:35:34 crc kubenswrapper[4903]: I0128 17:35:34.695697 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.042925 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.141094 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-sb\") pod \"68b6b21a-e766-4b5f-944f-ced63870b9c0\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.141248 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-dns-svc\") pod \"68b6b21a-e766-4b5f-944f-ced63870b9c0\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.141279 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-config\") pod \"68b6b21a-e766-4b5f-944f-ced63870b9c0\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.141563 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2gkh\" (UniqueName: \"kubernetes.io/projected/68b6b21a-e766-4b5f-944f-ced63870b9c0-kube-api-access-s2gkh\") pod \"68b6b21a-e766-4b5f-944f-ced63870b9c0\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.141598 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-nb\") pod \"68b6b21a-e766-4b5f-944f-ced63870b9c0\" (UID: \"68b6b21a-e766-4b5f-944f-ced63870b9c0\") " Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.158061 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b6b21a-e766-4b5f-944f-ced63870b9c0-kube-api-access-s2gkh" (OuterVolumeSpecName: "kube-api-access-s2gkh") pod "68b6b21a-e766-4b5f-944f-ced63870b9c0" (UID: "68b6b21a-e766-4b5f-944f-ced63870b9c0"). InnerVolumeSpecName "kube-api-access-s2gkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.225408 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68b6b21a-e766-4b5f-944f-ced63870b9c0" (UID: "68b6b21a-e766-4b5f-944f-ced63870b9c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.227330 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68b6b21a-e766-4b5f-944f-ced63870b9c0" (UID: "68b6b21a-e766-4b5f-944f-ced63870b9c0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.236283 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-config" (OuterVolumeSpecName: "config") pod "68b6b21a-e766-4b5f-944f-ced63870b9c0" (UID: "68b6b21a-e766-4b5f-944f-ced63870b9c0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.242362 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68b6b21a-e766-4b5f-944f-ced63870b9c0" (UID: "68b6b21a-e766-4b5f-944f-ced63870b9c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.244302 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.244327 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.244342 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2gkh\" (UniqueName: \"kubernetes.io/projected/68b6b21a-e766-4b5f-944f-ced63870b9c0-kube-api-access-s2gkh\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.244376 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.244385 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68b6b21a-e766-4b5f-944f-ced63870b9c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.559235 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t"] Jan 28 17:35:35 crc kubenswrapper[4903]: W0128 17:35:35.563362 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48c9360e_b3ed_41f7_a547_8abce5e78e96.slice/crio-0f10f16ad50c67a15c61fd64b07e28066d9145674d79f940e7bed026527746f0 WatchSource:0}: Error finding container 0f10f16ad50c67a15c61fd64b07e28066d9145674d79f940e7bed026527746f0: Status 404 returned error can't find the container with id 0f10f16ad50c67a15c61fd64b07e28066d9145674d79f940e7bed026527746f0 Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.694414 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" event={"ID":"48c9360e-b3ed-41f7-a547-8abce5e78e96","Type":"ContainerStarted","Data":"0f10f16ad50c67a15c61fd64b07e28066d9145674d79f940e7bed026527746f0"} Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.703904 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d795b979-fg72n" event={"ID":"68b6b21a-e766-4b5f-944f-ced63870b9c0","Type":"ContainerDied","Data":"c79c7bb8eefc6f2fce0e7275703cfa0928e2e98820647d40eb6c6eb2134e7666"} Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.703965 4903 scope.go:117] "RemoveContainer" containerID="d3121b1cc57323ec427df8a1b59c0e9ad4871a2f2dbde128889149873684441c" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.704133 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d795b979-fg72n" Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.803593 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d795b979-fg72n"] Jan 28 17:35:35 crc kubenswrapper[4903]: I0128 17:35:35.827060 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54d795b979-fg72n"] Jan 28 17:35:36 crc kubenswrapper[4903]: I0128 17:35:36.415202 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:35:36 crc kubenswrapper[4903]: E0128 17:35:36.415434 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:35:36 crc kubenswrapper[4903]: I0128 17:35:36.432853 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" path="/var/lib/kubelet/pods/68b6b21a-e766-4b5f-944f-ced63870b9c0/volumes" Jan 28 17:35:36 crc kubenswrapper[4903]: I0128 17:35:36.506016 4903 scope.go:117] "RemoveContainer" containerID="684cf576c302cd66effade45a1073940d00ea204dfe3085a2be12f37d0964303" Jan 28 17:35:36 crc kubenswrapper[4903]: I0128 17:35:36.715988 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" event={"ID":"48c9360e-b3ed-41f7-a547-8abce5e78e96","Type":"ContainerStarted","Data":"4426ba5c755bf7042aecbb900824e312926369c899705709dd2d8f92d1354fb8"} Jan 28 17:35:37 crc kubenswrapper[4903]: I0128 17:35:37.731124 4903 generic.go:334] "Generic (PLEG): container finished" podID="48c9360e-b3ed-41f7-a547-8abce5e78e96" containerID="4426ba5c755bf7042aecbb900824e312926369c899705709dd2d8f92d1354fb8" exitCode=0 Jan 28 17:35:37 crc kubenswrapper[4903]: I0128 17:35:37.731178 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" event={"ID":"48c9360e-b3ed-41f7-a547-8abce5e78e96","Type":"ContainerDied","Data":"4426ba5c755bf7042aecbb900824e312926369c899705709dd2d8f92d1354fb8"} Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.165135 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh"] Jan 28 17:35:38 crc kubenswrapper[4903]: E0128 17:35:38.165922 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerName="dnsmasq-dns" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.165944 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerName="dnsmasq-dns" Jan 28 17:35:38 crc kubenswrapper[4903]: E0128 17:35:38.165955 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerName="init" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.165963 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerName="init" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.166260 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerName="dnsmasq-dns" Jan 28 
17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.167121 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.171773 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.181705 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.204990 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.211947 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.241313 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh"] Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.321386 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.321498 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvx8t\" (UniqueName: \"kubernetes.io/projected/6c1e7f43-edee-4c1a-8668-22c199cb09d6-kube-api-access-kvx8t\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.321573 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.321610 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.426788 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.426898 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvx8t\" (UniqueName: \"kubernetes.io/projected/6c1e7f43-edee-4c1a-8668-22c199cb09d6-kube-api-access-kvx8t\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.426936 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.426971 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.432160 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.439041 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.441333 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.460092 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvx8t\" (UniqueName: \"kubernetes.io/projected/6c1e7f43-edee-4c1a-8668-22c199cb09d6-kube-api-access-kvx8t\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.485063 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.746877 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" event={"ID":"48c9360e-b3ed-41f7-a547-8abce5e78e96","Type":"ContainerStarted","Data":"74462f2287ffbbc53375f2394346f08d227fd72ff3054af74331be427d7eef92"} Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.748355 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:38 crc kubenswrapper[4903]: I0128 17:35:38.777811 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" podStartSLOduration=4.777788574 podStartE2EDuration="4.777788574s" podCreationTimestamp="2026-01-28 17:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 17:35:38.76786976 +0000 UTC m=+6611.043841291" watchObservedRunningTime="2026-01-28 17:35:38.777788574 +0000 UTC m=+6611.053760085" Jan 28 17:35:39 crc kubenswrapper[4903]: W0128 17:35:39.452914 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c1e7f43_edee_4c1a_8668_22c199cb09d6.slice/crio-564f971a71f5ad680a4be0c8f3d76f4e43ee2d1d95ab4bfe1dd08c829380e421 WatchSource:0}: Error finding container 564f971a71f5ad680a4be0c8f3d76f4e43ee2d1d95ab4bfe1dd08c829380e421: Status 404 returned error can't find the container with id 564f971a71f5ad680a4be0c8f3d76f4e43ee2d1d95ab4bfe1dd08c829380e421 Jan 28 17:35:39 crc kubenswrapper[4903]: I0128 17:35:39.454646 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh"] Jan 28 17:35:39 crc kubenswrapper[4903]: I0128 17:35:39.587351 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54d795b979-fg72n" podUID="68b6b21a-e766-4b5f-944f-ced63870b9c0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.85:5353: i/o timeout" Jan 28 17:35:39 crc kubenswrapper[4903]: I0128 17:35:39.756076 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" event={"ID":"6c1e7f43-edee-4c1a-8668-22c199cb09d6","Type":"ContainerStarted","Data":"564f971a71f5ad680a4be0c8f3d76f4e43ee2d1d95ab4bfe1dd08c829380e421"} Jan 28 17:35:44 crc kubenswrapper[4903]: I0128 17:35:44.696775 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d4cbb4ddf-pkv2t" Jan 28 17:35:44 crc kubenswrapper[4903]: I0128 17:35:44.786296 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f4dc5bc4f-hqmmp"] Jan 28 17:35:44 crc kubenswrapper[4903]: I0128 17:35:44.786638 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="dnsmasq-dns" containerID="cri-o://47935c8b2af3b6897945f165f9735727c23474a2b5b40918b59b58271c50b5ec" gracePeriod=10 Jan 28 17:35:45 crc kubenswrapper[4903]: I0128 17:35:45.835009 4903 generic.go:334] "Generic (PLEG): container finished" podID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerID="47935c8b2af3b6897945f165f9735727c23474a2b5b40918b59b58271c50b5ec" exitCode=0 Jan 28 17:35:45 crc 
kubenswrapper[4903]: I0128 17:35:45.835076 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" event={"ID":"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5","Type":"ContainerDied","Data":"47935c8b2af3b6897945f165f9735727c23474a2b5b40918b59b58271c50b5ec"} Jan 28 17:35:47 crc kubenswrapper[4903]: I0128 17:35:47.413886 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:35:47 crc kubenswrapper[4903]: E0128 17:35:47.414627 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:35:49 crc kubenswrapper[4903]: I0128 17:35:49.053261 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.148:5353: connect: connection refused" Jan 28 17:35:54 crc kubenswrapper[4903]: I0128 17:35:54.053887 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.148:5353: connect: connection refused" Jan 28 17:35:55 crc kubenswrapper[4903]: E0128 17:35:55.429266 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 28 17:35:55 crc kubenswrapper[4903]: E0128 17:35:55.429662 4903 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 17:35:55 crc kubenswrapper[4903]: container &Container{Name:pre-adoption-validation-openstack-pre-adoption-openstack-cell1,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p osp.edpm.pre_adoption_validation -i pre-adoption-validation-openstack-pre-adoption-openstack-cell1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_CALLBACKS_ENABLED,Value:ansible.posix.profile_tasks,ValueFrom:nil,},EnvVar{Name:ANSIBLE_CALLBACK_RESULT_FORMAT,Value:yaml,ValueFrom:nil,},EnvVar{Name:ANSIBLE_FORCE_COLOR,Value:True,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DISPLAY_ARGS_TO_STDOUT,Value:True,ValueFrom:nil,},EnvVar{Name:ANSIBLE_SSH_ARGS,Value:-C -o ControlMaster=auto -o ControlPersist=80s,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY,Value:1,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 28 17:35:55 crc kubenswrapper[4903]: osp.edpm.pre_adoption_validation Jan 28 17:35:55 crc kubenswrapper[4903]: Jan 28 17:35:55 crc kubenswrapper[4903]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 28 17:35:55 crc kubenswrapper[4903]: edpm_override_hosts: openstack-cell1 Jan 28 17:35:55 crc kubenswrapper[4903]: edpm_service_type: pre-adoption-validation Jan 28 17:35:55 crc kubenswrapper[4903]: edpm_services_override: [pre-adoption-validation] Jan 28 17:35:55 crc kubenswrapper[4903]: Jan 28 17:35:55 crc kubenswrapper[4903]: Jan 28 17:35:55 crc kubenswrapper[4903]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:pre-adoption-validation-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/pre-adoption-validation,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-cell1,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-cell1,SubPath:ssh_key_openstack-cell1,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvx8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh_openstack(6c1e7f43-edee-4c1a-8668-22c199cb09d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 28 17:35:55 crc kubenswrapper[4903]: > logger="UnhandledError" Jan 28 17:35:55 crc kubenswrapper[4903]: E0128 17:35:55.430994 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pre-adoption-validation-openstack-pre-adoption-openstack-cell1\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" podUID="6c1e7f43-edee-4c1a-8668-22c199cb09d6" Jan 28 17:35:55 crc kubenswrapper[4903]: E0128 17:35:55.950635 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pre-adoption-validation-openstack-pre-adoption-openstack-cell1\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" podUID="6c1e7f43-edee-4c1a-8668-22c199cb09d6" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.280860 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.365934 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-config\") pod \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.366153 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-dns-svc\") pod \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.366212 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-nb\") pod \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.366350 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nflfn\" (UniqueName: \"kubernetes.io/projected/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-kube-api-access-nflfn\") pod \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.366411 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-openstack-cell1\") pod \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.366528 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-sb\") pod \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\" (UID: \"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5\") " Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.387544 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-kube-api-access-nflfn" (OuterVolumeSpecName: "kube-api-access-nflfn") pod "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" (UID: "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5"). InnerVolumeSpecName "kube-api-access-nflfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.433817 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" (UID: "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.437831 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" (UID: "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.441162 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" (UID: "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.443076 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-config" (OuterVolumeSpecName: "config") pod "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" (UID: "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.448248 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" (UID: "cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.468587 4903 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.468625 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.468641 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nflfn\" (UniqueName: \"kubernetes.io/projected/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-kube-api-access-nflfn\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.468656 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.468669 4903 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.468682 4903 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5-config\") on node \"crc\" DevicePath \"\"" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.954329 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" event={"ID":"cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5","Type":"ContainerDied","Data":"70e1989645356db3c9673c506b6248b3f0b2c84f00a457f0976c6c1fa2692f78"} Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.954377 4903 scope.go:117] "RemoveContainer" containerID="47935c8b2af3b6897945f165f9735727c23474a2b5b40918b59b58271c50b5ec" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.954418 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f4dc5bc4f-hqmmp" Jan 28 17:35:56 crc kubenswrapper[4903]: I0128 17:35:56.999202 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f4dc5bc4f-hqmmp"] Jan 28 17:35:57 crc kubenswrapper[4903]: I0128 17:35:57.005236 4903 scope.go:117] "RemoveContainer" containerID="86ca1c98b7e11ddc525bf3205063a436a24cce129495c4930626b60f929b3cf0" Jan 28 17:35:57 crc kubenswrapper[4903]: I0128 17:35:57.008444 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f4dc5bc4f-hqmmp"] Jan 28 17:35:58 crc kubenswrapper[4903]: I0128 17:35:58.422310 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:35:58 crc kubenswrapper[4903]: E0128 17:35:58.424038 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:35:58 crc kubenswrapper[4903]: I0128 17:35:58.438654 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" path="/var/lib/kubelet/pods/cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5/volumes" Jan 28 17:35:58 crc kubenswrapper[4903]: I0128 17:35:58.813334 4903 scope.go:117] "RemoveContainer" containerID="bac783114cdd7d12e7cf7e386aebbce96e54ae3fa8bb9ced35922b06fa260eef" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.107506 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vqjr4"] Jan 28 17:36:09 crc kubenswrapper[4903]: E0128 17:36:09.108515 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="init" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.108555 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="init" Jan 28 17:36:09 crc kubenswrapper[4903]: E0128 17:36:09.108588 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="dnsmasq-dns" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.108595 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="dnsmasq-dns" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.108811 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfac5a1e-0c09-41d7-a4b7-0ef8afab73e5" containerName="dnsmasq-dns" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.110913 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.123559 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqjr4"] Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.158889 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-catalog-content\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.159220 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-utilities\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.159334 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgm94\" (UniqueName: \"kubernetes.io/projected/60e1d6fd-ed31-4387-8071-2a6762968aa1-kube-api-access-vgm94\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.261549 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-catalog-content\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.261756 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-utilities\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.261815 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgm94\" (UniqueName: \"kubernetes.io/projected/60e1d6fd-ed31-4387-8071-2a6762968aa1-kube-api-access-vgm94\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.262175 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-catalog-content\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.262209 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-utilities\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.285873 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vgm94\" (UniqueName: \"kubernetes.io/projected/60e1d6fd-ed31-4387-8071-2a6762968aa1-kube-api-access-vgm94\") pod \"redhat-marketplace-vqjr4\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.413649 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:36:09 crc kubenswrapper[4903]: E0128 17:36:09.414041 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.441290 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:09 crc kubenswrapper[4903]: I0128 17:36:09.844855 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqjr4"] Jan 28 17:36:10 crc kubenswrapper[4903]: I0128 17:36:10.080557 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqjr4" event={"ID":"60e1d6fd-ed31-4387-8071-2a6762968aa1","Type":"ContainerStarted","Data":"689b1ee9bc2856e874444ab8f465c6f1fdb90ba71166ba2ec7d876e2808e826d"} Jan 28 17:36:11 crc kubenswrapper[4903]: I0128 17:36:11.096040 4903 generic.go:334] "Generic (PLEG): container finished" podID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerID="cb471616ca5ce78b8635a055ad7be40a005ac8e102742e8ab964e50009c31682" exitCode=0 Jan 28 17:36:11 crc kubenswrapper[4903]: I0128 17:36:11.096490 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqjr4" event={"ID":"60e1d6fd-ed31-4387-8071-2a6762968aa1","Type":"ContainerDied","Data":"cb471616ca5ce78b8635a055ad7be40a005ac8e102742e8ab964e50009c31682"} Jan 28 17:36:13 crc kubenswrapper[4903]: I0128 17:36:13.114280 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" event={"ID":"6c1e7f43-edee-4c1a-8668-22c199cb09d6","Type":"ContainerStarted","Data":"7d04b6bc0816df55d45678e8edb0f0cbd02fed2e869fd353c7f08d858335538a"} Jan 28 17:36:13 crc kubenswrapper[4903]: I0128 17:36:13.126296 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqjr4" event={"ID":"60e1d6fd-ed31-4387-8071-2a6762968aa1","Type":"ContainerStarted","Data":"5e0236a69805736669ba4f0aaa186e9a315df4a5f7cfc508890aac282c4ed711"} Jan 28 17:36:13 crc kubenswrapper[4903]: I0128 17:36:13.170736 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" podStartSLOduration=2.1797892 podStartE2EDuration="35.170714154s" podCreationTimestamp="2026-01-28 17:35:38 +0000 UTC" firstStartedPulling="2026-01-28 17:35:39.454988452 +0000 UTC m=+6611.730959963" lastFinishedPulling="2026-01-28 17:36:12.445913406 +0000 UTC m=+6644.721884917" observedRunningTime="2026-01-28 17:36:13.140308022 +0000 UTC m=+6645.416279553" watchObservedRunningTime="2026-01-28 17:36:13.170714154 +0000 UTC m=+6645.446685665" 
Jan 28 17:36:18 crc kubenswrapper[4903]: I0128 17:36:18.177583 4903 generic.go:334] "Generic (PLEG): container finished" podID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerID="5e0236a69805736669ba4f0aaa186e9a315df4a5f7cfc508890aac282c4ed711" exitCode=0 Jan 28 17:36:18 crc kubenswrapper[4903]: I0128 17:36:18.177695 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqjr4" event={"ID":"60e1d6fd-ed31-4387-8071-2a6762968aa1","Type":"ContainerDied","Data":"5e0236a69805736669ba4f0aaa186e9a315df4a5f7cfc508890aac282c4ed711"} Jan 28 17:36:19 crc kubenswrapper[4903]: I0128 17:36:19.190146 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqjr4" event={"ID":"60e1d6fd-ed31-4387-8071-2a6762968aa1","Type":"ContainerStarted","Data":"c99d79bf19b7e9200297cb16b37716db2d9f17efda1b2396966e4a1e111748c8"} Jan 28 17:36:19 crc kubenswrapper[4903]: I0128 17:36:19.210652 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vqjr4" podStartSLOduration=2.653405771 podStartE2EDuration="10.21063186s" podCreationTimestamp="2026-01-28 17:36:09 +0000 UTC" firstStartedPulling="2026-01-28 17:36:11.100554031 +0000 UTC m=+6643.376525542" lastFinishedPulling="2026-01-28 17:36:18.65778012 +0000 UTC m=+6650.933751631" observedRunningTime="2026-01-28 17:36:19.207102706 +0000 UTC m=+6651.483074217" watchObservedRunningTime="2026-01-28 17:36:19.21063186 +0000 UTC m=+6651.486603371" Jan 28 17:36:19 crc kubenswrapper[4903]: I0128 17:36:19.442044 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:19 crc kubenswrapper[4903]: I0128 17:36:19.442106 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:20 crc kubenswrapper[4903]: I0128 17:36:20.414371 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:36:20 crc kubenswrapper[4903]: E0128 17:36:20.415147 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:36:20 crc kubenswrapper[4903]: I0128 17:36:20.494235 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vqjr4" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="registry-server" probeResult="failure" output=< Jan 28 17:36:20 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:36:20 crc kubenswrapper[4903]: > Jan 28 17:36:27 crc kubenswrapper[4903]: I0128 17:36:27.272768 4903 generic.go:334] "Generic (PLEG): container finished" podID="6c1e7f43-edee-4c1a-8668-22c199cb09d6" containerID="7d04b6bc0816df55d45678e8edb0f0cbd02fed2e869fd353c7f08d858335538a" exitCode=0 Jan 28 17:36:27 crc kubenswrapper[4903]: I0128 17:36:27.272862 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" 
event={"ID":"6c1e7f43-edee-4c1a-8668-22c199cb09d6","Type":"ContainerDied","Data":"7d04b6bc0816df55d45678e8edb0f0cbd02fed2e869fd353c7f08d858335538a"} Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.773671 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.826111 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-pre-adoption-validation-combined-ca-bundle\") pod \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.826162 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-inventory\") pod \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.826257 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-ssh-key-openstack-cell1\") pod \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.826433 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvx8t\" (UniqueName: \"kubernetes.io/projected/6c1e7f43-edee-4c1a-8668-22c199cb09d6-kube-api-access-kvx8t\") pod \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\" (UID: \"6c1e7f43-edee-4c1a-8668-22c199cb09d6\") " Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.831930 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "6c1e7f43-edee-4c1a-8668-22c199cb09d6" (UID: "6c1e7f43-edee-4c1a-8668-22c199cb09d6"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.832048 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1e7f43-edee-4c1a-8668-22c199cb09d6-kube-api-access-kvx8t" (OuterVolumeSpecName: "kube-api-access-kvx8t") pod "6c1e7f43-edee-4c1a-8668-22c199cb09d6" (UID: "6c1e7f43-edee-4c1a-8668-22c199cb09d6"). InnerVolumeSpecName "kube-api-access-kvx8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.861323 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-inventory" (OuterVolumeSpecName: "inventory") pod "6c1e7f43-edee-4c1a-8668-22c199cb09d6" (UID: "6c1e7f43-edee-4c1a-8668-22c199cb09d6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.867767 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "6c1e7f43-edee-4c1a-8668-22c199cb09d6" (UID: "6c1e7f43-edee-4c1a-8668-22c199cb09d6"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.929115 4903 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.929147 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.929160 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/6c1e7f43-edee-4c1a-8668-22c199cb09d6-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:28 crc kubenswrapper[4903]: I0128 17:36:28.929169 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvx8t\" (UniqueName: \"kubernetes.io/projected/6c1e7f43-edee-4c1a-8668-22c199cb09d6-kube-api-access-kvx8t\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:29 crc kubenswrapper[4903]: I0128 17:36:29.293699 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" event={"ID":"6c1e7f43-edee-4c1a-8668-22c199cb09d6","Type":"ContainerDied","Data":"564f971a71f5ad680a4be0c8f3d76f4e43ee2d1d95ab4bfe1dd08c829380e421"} Jan 28 17:36:29 crc kubenswrapper[4903]: I0128 17:36:29.293762 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564f971a71f5ad680a4be0c8f3d76f4e43ee2d1d95ab4bfe1dd08c829380e421" Jan 28 17:36:29 crc kubenswrapper[4903]: I0128 17:36:29.293765 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cwszqh" Jan 28 17:36:29 crc kubenswrapper[4903]: I0128 17:36:29.543453 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:29 crc kubenswrapper[4903]: I0128 17:36:29.599661 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:29 crc kubenswrapper[4903]: I0128 17:36:29.783778 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqjr4"] Jan 28 17:36:31 crc kubenswrapper[4903]: I0128 17:36:31.310224 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vqjr4" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="registry-server" containerID="cri-o://c99d79bf19b7e9200297cb16b37716db2d9f17efda1b2396966e4a1e111748c8" gracePeriod=2 Jan 28 17:36:31 crc kubenswrapper[4903]: I0128 17:36:31.414260 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:36:31 crc kubenswrapper[4903]: E0128 17:36:31.414521 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.326068 4903 generic.go:334] "Generic (PLEG): container finished" podID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerID="c99d79bf19b7e9200297cb16b37716db2d9f17efda1b2396966e4a1e111748c8" exitCode=0 Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.326275 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqjr4" event={"ID":"60e1d6fd-ed31-4387-8071-2a6762968aa1","Type":"ContainerDied","Data":"c99d79bf19b7e9200297cb16b37716db2d9f17efda1b2396966e4a1e111748c8"} Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.326450 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqjr4" event={"ID":"60e1d6fd-ed31-4387-8071-2a6762968aa1","Type":"ContainerDied","Data":"689b1ee9bc2856e874444ab8f465c6f1fdb90ba71166ba2ec7d876e2808e826d"} Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.326471 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689b1ee9bc2856e874444ab8f465c6f1fdb90ba71166ba2ec7d876e2808e826d" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.416176 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.517315 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-utilities\") pod \"60e1d6fd-ed31-4387-8071-2a6762968aa1\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.517502 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-catalog-content\") pod \"60e1d6fd-ed31-4387-8071-2a6762968aa1\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.517573 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgm94\" (UniqueName: \"kubernetes.io/projected/60e1d6fd-ed31-4387-8071-2a6762968aa1-kube-api-access-vgm94\") pod \"60e1d6fd-ed31-4387-8071-2a6762968aa1\" (UID: \"60e1d6fd-ed31-4387-8071-2a6762968aa1\") " Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.519570 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-utilities" (OuterVolumeSpecName: "utilities") pod "60e1d6fd-ed31-4387-8071-2a6762968aa1" (UID: "60e1d6fd-ed31-4387-8071-2a6762968aa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.526517 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e1d6fd-ed31-4387-8071-2a6762968aa1-kube-api-access-vgm94" (OuterVolumeSpecName: "kube-api-access-vgm94") pod "60e1d6fd-ed31-4387-8071-2a6762968aa1" (UID: "60e1d6fd-ed31-4387-8071-2a6762968aa1"). InnerVolumeSpecName "kube-api-access-vgm94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.546679 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60e1d6fd-ed31-4387-8071-2a6762968aa1" (UID: "60e1d6fd-ed31-4387-8071-2a6762968aa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.621093 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.621135 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60e1d6fd-ed31-4387-8071-2a6762968aa1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:32 crc kubenswrapper[4903]: I0128 17:36:32.621147 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgm94\" (UniqueName: \"kubernetes.io/projected/60e1d6fd-ed31-4387-8071-2a6762968aa1-kube-api-access-vgm94\") on node \"crc\" DevicePath \"\"" Jan 28 17:36:33 crc kubenswrapper[4903]: I0128 17:36:33.335131 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqjr4" Jan 28 17:36:33 crc kubenswrapper[4903]: I0128 17:36:33.375027 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqjr4"] Jan 28 17:36:33 crc kubenswrapper[4903]: I0128 17:36:33.386917 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqjr4"] Jan 28 17:36:34 crc kubenswrapper[4903]: I0128 17:36:34.426965 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" path="/var/lib/kubelet/pods/60e1d6fd-ed31-4387-8071-2a6762968aa1/volumes" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.996792 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z"] Jan 28 17:36:35 crc kubenswrapper[4903]: E0128 17:36:35.997806 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1e7f43-edee-4c1a-8668-22c199cb09d6" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.998083 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1e7f43-edee-4c1a-8668-22c199cb09d6" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 28 17:36:35 crc kubenswrapper[4903]: E0128 17:36:35.998108 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="extract-utilities" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.998119 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="extract-utilities" Jan 28 17:36:35 crc kubenswrapper[4903]: E0128 17:36:35.998146 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="registry-server" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.998154 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="registry-server" Jan 28 17:36:35 crc kubenswrapper[4903]: E0128 17:36:35.998181 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="extract-content" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.998190 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="extract-content" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.998431 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e1d6fd-ed31-4387-8071-2a6762968aa1" containerName="registry-server" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.998473 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1e7f43-edee-4c1a-8668-22c199cb09d6" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 28 17:36:35 crc kubenswrapper[4903]: I0128 17:36:35.999588 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.004319 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.004457 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.005111 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.005995 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.010492 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z"] Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.096107 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.096154 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.096321 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.096357 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wbx\" (UniqueName: \"kubernetes.io/projected/1058a0d0-b6fe-458c-95f6-ab19e47c2043-kube-api-access-s7wbx\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.199137 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.199214 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wbx\" (UniqueName: 
\"kubernetes.io/projected/1058a0d0-b6fe-458c-95f6-ab19e47c2043-kube-api-access-s7wbx\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.199322 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.199341 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.205639 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.208217 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.210898 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.220352 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wbx\" (UniqueName: \"kubernetes.io/projected/1058a0d0-b6fe-458c-95f6-ab19e47c2043-kube-api-access-s7wbx\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.329152 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:36:36 crc kubenswrapper[4903]: I0128 17:36:36.927294 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z"] Jan 28 17:36:36 crc kubenswrapper[4903]: W0128 17:36:36.931940 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1058a0d0_b6fe_458c_95f6_ab19e47c2043.slice/crio-057dfff82b590a44ba5da692181e33a69f4cac5c500224592b1216a4c999b450 WatchSource:0}: Error finding container 057dfff82b590a44ba5da692181e33a69f4cac5c500224592b1216a4c999b450: Status 404 returned error can't find the container with id 057dfff82b590a44ba5da692181e33a69f4cac5c500224592b1216a4c999b450 Jan 28 17:36:37 crc kubenswrapper[4903]: I0128 17:36:37.380326 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" event={"ID":"1058a0d0-b6fe-458c-95f6-ab19e47c2043","Type":"ContainerStarted","Data":"057dfff82b590a44ba5da692181e33a69f4cac5c500224592b1216a4c999b450"} Jan 28 17:36:39 crc kubenswrapper[4903]: I0128 17:36:39.402323 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" event={"ID":"1058a0d0-b6fe-458c-95f6-ab19e47c2043","Type":"ContainerStarted","Data":"55ad7b7905c47050b5777f185aef537948745ca7847376dfc75dd0d88f5f47b9"} Jan 28 17:36:39 crc kubenswrapper[4903]: I0128 17:36:39.447277 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" podStartSLOduration=3.080636777 podStartE2EDuration="4.447258328s" podCreationTimestamp="2026-01-28 17:36:35 +0000 UTC" firstStartedPulling="2026-01-28 17:36:36.934581909 +0000 UTC m=+6669.210553420" lastFinishedPulling="2026-01-28 17:36:38.30120346 +0000 UTC m=+6670.577174971" observedRunningTime="2026-01-28 17:36:39.438432913 +0000 UTC m=+6671.714404424" watchObservedRunningTime="2026-01-28 17:36:39.447258328 +0000 UTC m=+6671.723229839" Jan 28 17:36:44 crc kubenswrapper[4903]: I0128 17:36:44.413941 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:36:44 crc kubenswrapper[4903]: E0128 17:36:44.414746 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:36:46 crc kubenswrapper[4903]: I0128 17:36:46.044784 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-vch6z"] Jan 28 17:36:46 crc kubenswrapper[4903]: I0128 17:36:46.052687 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-3c95-account-create-update-wwn2s"] Jan 28 17:36:46 crc kubenswrapper[4903]: I0128 17:36:46.064469 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-vch6z"] Jan 28 17:36:46 crc kubenswrapper[4903]: I0128 17:36:46.073438 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-3c95-account-create-update-wwn2s"] Jan 28 17:36:46 crc kubenswrapper[4903]: I0128 
17:36:46.425947 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86350aa2-f96f-4ef9-9972-59ceda005637" path="/var/lib/kubelet/pods/86350aa2-f96f-4ef9-9972-59ceda005637/volumes" Jan 28 17:36:46 crc kubenswrapper[4903]: I0128 17:36:46.427105 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="895bdd55-2240-428e-9fad-4449bb7cbe36" path="/var/lib/kubelet/pods/895bdd55-2240-428e-9fad-4449bb7cbe36/volumes" Jan 28 17:36:52 crc kubenswrapper[4903]: I0128 17:36:52.038394 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-wp9mh"] Jan 28 17:36:52 crc kubenswrapper[4903]: I0128 17:36:52.047644 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-wp9mh"] Jan 28 17:36:52 crc kubenswrapper[4903]: I0128 17:36:52.425663 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141f08f5-50f7-429e-bc31-888f86f1a477" path="/var/lib/kubelet/pods/141f08f5-50f7-429e-bc31-888f86f1a477/volumes" Jan 28 17:36:53 crc kubenswrapper[4903]: I0128 17:36:53.034056 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-2943-account-create-update-8fdqj"] Jan 28 17:36:53 crc kubenswrapper[4903]: I0128 17:36:53.043806 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-2943-account-create-update-8fdqj"] Jan 28 17:36:54 crc kubenswrapper[4903]: I0128 17:36:54.472708 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446f97b9-ee08-4b89-8fe6-e17021aaa142" path="/var/lib/kubelet/pods/446f97b9-ee08-4b89-8fe6-e17021aaa142/volumes" Jan 28 17:36:56 crc kubenswrapper[4903]: I0128 17:36:56.414410 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:36:56 crc kubenswrapper[4903]: E0128 17:36:56.415201 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:36:58 crc kubenswrapper[4903]: I0128 17:36:58.976948 4903 scope.go:117] "RemoveContainer" containerID="902f80c47ecd4b91ae9f13cc504de58fcd5b0b801cf945e4057e244115f17105" Jan 28 17:36:59 crc kubenswrapper[4903]: I0128 17:36:59.027459 4903 scope.go:117] "RemoveContainer" containerID="bc604d076fa8fd376e7afdd81c370e9d350ea00e554e1ea7bcebf24eaba28cb8" Jan 28 17:36:59 crc kubenswrapper[4903]: I0128 17:36:59.076817 4903 scope.go:117] "RemoveContainer" containerID="553255b762b13ecc48a3dcd82a8c6eed3a34ca56ff26142e2c5d40f87d8baea5" Jan 28 17:36:59 crc kubenswrapper[4903]: I0128 17:36:59.137123 4903 scope.go:117] "RemoveContainer" containerID="071c625799897330b9ecdbf334d7c3678096acc450661f473d270bdb460d3b99" Jan 28 17:37:09 crc kubenswrapper[4903]: I0128 17:37:09.414848 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:37:09 crc kubenswrapper[4903]: E0128 17:37:09.417584 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:37:24 crc kubenswrapper[4903]: I0128 17:37:24.414294 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:37:24 crc kubenswrapper[4903]: E0128 17:37:24.415281 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:37:38 crc kubenswrapper[4903]: I0128 17:37:38.422335 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:37:39 crc kubenswrapper[4903]: I0128 17:37:39.044680 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-cmhx7"] Jan 28 17:37:39 crc kubenswrapper[4903]: I0128 17:37:39.055568 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-cmhx7"] Jan 28 17:37:39 crc kubenswrapper[4903]: I0128 17:37:39.167332 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"d7af3753e515febbe71c10e97c6ff6f7c034bc5e56b78fb1a144c96585e23f42"} Jan 28 17:37:40 crc kubenswrapper[4903]: I0128 17:37:40.427758 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e" path="/var/lib/kubelet/pods/a5fabfbb-bf98-4cb3-a3b6-1703f6b1317e/volumes" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.629144 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g7kks"] Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.632823 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.644337 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7kks"] Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.787085 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-catalog-content\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.787658 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-utilities\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.787732 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwt5g\" (UniqueName: \"kubernetes.io/projected/96a701b8-16e8-4cda-afb0-46f577a6b53d-kube-api-access-xwt5g\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.890935 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwt5g\" (UniqueName: \"kubernetes.io/projected/96a701b8-16e8-4cda-afb0-46f577a6b53d-kube-api-access-xwt5g\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.891114 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-catalog-content\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.891186 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-utilities\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.891886 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-utilities\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.892561 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-catalog-content\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.914960 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xwt5g\" (UniqueName: \"kubernetes.io/projected/96a701b8-16e8-4cda-afb0-46f577a6b53d-kube-api-access-xwt5g\") pod \"community-operators-g7kks\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:46 crc kubenswrapper[4903]: I0128 17:37:46.959921 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:47 crc kubenswrapper[4903]: I0128 17:37:47.517174 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7kks"] Jan 28 17:37:48 crc kubenswrapper[4903]: I0128 17:37:48.246551 4903 generic.go:334] "Generic (PLEG): container finished" podID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerID="d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce" exitCode=0 Jan 28 17:37:48 crc kubenswrapper[4903]: I0128 17:37:48.246684 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7kks" event={"ID":"96a701b8-16e8-4cda-afb0-46f577a6b53d","Type":"ContainerDied","Data":"d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce"} Jan 28 17:37:48 crc kubenswrapper[4903]: I0128 17:37:48.246877 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7kks" event={"ID":"96a701b8-16e8-4cda-afb0-46f577a6b53d","Type":"ContainerStarted","Data":"0dc3974b2963a95d57e55d8fa0e76299ecf27f129855c04fb7e7c4d593c7a495"} Jan 28 17:37:48 crc kubenswrapper[4903]: I0128 17:37:48.251377 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:37:49 crc kubenswrapper[4903]: I0128 17:37:49.258113 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7kks" event={"ID":"96a701b8-16e8-4cda-afb0-46f577a6b53d","Type":"ContainerStarted","Data":"117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55"} Jan 28 17:37:52 crc kubenswrapper[4903]: I0128 17:37:52.289104 4903 generic.go:334] "Generic (PLEG): container finished" podID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerID="117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55" exitCode=0 Jan 28 17:37:52 crc kubenswrapper[4903]: I0128 17:37:52.289748 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7kks" event={"ID":"96a701b8-16e8-4cda-afb0-46f577a6b53d","Type":"ContainerDied","Data":"117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55"} Jan 28 17:37:54 crc kubenswrapper[4903]: I0128 17:37:54.327888 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7kks" event={"ID":"96a701b8-16e8-4cda-afb0-46f577a6b53d","Type":"ContainerStarted","Data":"81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899"} Jan 28 17:37:54 crc kubenswrapper[4903]: I0128 17:37:54.380046 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g7kks" podStartSLOduration=3.356012825 podStartE2EDuration="8.380023846s" podCreationTimestamp="2026-01-28 17:37:46 +0000 UTC" firstStartedPulling="2026-01-28 17:37:48.251131786 +0000 UTC m=+6740.527103297" lastFinishedPulling="2026-01-28 17:37:53.275142807 +0000 UTC m=+6745.551114318" observedRunningTime="2026-01-28 17:37:54.351551856 +0000 UTC m=+6746.627523387" watchObservedRunningTime="2026-01-28 
17:37:54.380023846 +0000 UTC m=+6746.655995357" Jan 28 17:37:56 crc kubenswrapper[4903]: I0128 17:37:56.960363 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:56 crc kubenswrapper[4903]: I0128 17:37:56.960756 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:37:58 crc kubenswrapper[4903]: I0128 17:37:58.018785 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g7kks" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="registry-server" probeResult="failure" output=< Jan 28 17:37:58 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:37:58 crc kubenswrapper[4903]: > Jan 28 17:37:59 crc kubenswrapper[4903]: I0128 17:37:59.287890 4903 scope.go:117] "RemoveContainer" containerID="f368d7e804df4ce989ce29411c561fc04a046233891fe69cb7a992cd1bd2df5d" Jan 28 17:37:59 crc kubenswrapper[4903]: I0128 17:37:59.318441 4903 scope.go:117] "RemoveContainer" containerID="0bf86165bbb46d121c5e6392ba6b328c0c3dd2fd07dda191f67558a3b04e5bba" Jan 28 17:37:59 crc kubenswrapper[4903]: I0128 17:37:59.421060 4903 scope.go:117] "RemoveContainer" containerID="163d9f47b2d4f34ce9af557c982458b7f7930de164c45bb24bebfb6a7b86f39d" Jan 28 17:38:07 crc kubenswrapper[4903]: I0128 17:38:07.015056 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:38:07 crc kubenswrapper[4903]: I0128 17:38:07.081188 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:38:07 crc kubenswrapper[4903]: I0128 17:38:07.256683 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7kks"] Jan 28 17:38:08 crc kubenswrapper[4903]: I0128 17:38:08.471010 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g7kks" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="registry-server" containerID="cri-o://81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899" gracePeriod=2 Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.002694 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.131439 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwt5g\" (UniqueName: \"kubernetes.io/projected/96a701b8-16e8-4cda-afb0-46f577a6b53d-kube-api-access-xwt5g\") pod \"96a701b8-16e8-4cda-afb0-46f577a6b53d\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.132144 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-catalog-content\") pod \"96a701b8-16e8-4cda-afb0-46f577a6b53d\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.132288 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-utilities\") pod \"96a701b8-16e8-4cda-afb0-46f577a6b53d\" (UID: \"96a701b8-16e8-4cda-afb0-46f577a6b53d\") " Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.133708 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-utilities" (OuterVolumeSpecName: "utilities") pod "96a701b8-16e8-4cda-afb0-46f577a6b53d" (UID: "96a701b8-16e8-4cda-afb0-46f577a6b53d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.144715 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a701b8-16e8-4cda-afb0-46f577a6b53d-kube-api-access-xwt5g" (OuterVolumeSpecName: "kube-api-access-xwt5g") pod "96a701b8-16e8-4cda-afb0-46f577a6b53d" (UID: "96a701b8-16e8-4cda-afb0-46f577a6b53d"). InnerVolumeSpecName "kube-api-access-xwt5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.198209 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96a701b8-16e8-4cda-afb0-46f577a6b53d" (UID: "96a701b8-16e8-4cda-afb0-46f577a6b53d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.235232 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwt5g\" (UniqueName: \"kubernetes.io/projected/96a701b8-16e8-4cda-afb0-46f577a6b53d-kube-api-access-xwt5g\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.235287 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.235297 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a701b8-16e8-4cda-afb0-46f577a6b53d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.481306 4903 generic.go:334] "Generic (PLEG): container finished" podID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerID="81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899" exitCode=0 Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.481352 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7kks" event={"ID":"96a701b8-16e8-4cda-afb0-46f577a6b53d","Type":"ContainerDied","Data":"81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899"} Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.481384 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7kks" event={"ID":"96a701b8-16e8-4cda-afb0-46f577a6b53d","Type":"ContainerDied","Data":"0dc3974b2963a95d57e55d8fa0e76299ecf27f129855c04fb7e7c4d593c7a495"} Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.481390 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g7kks" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.481402 4903 scope.go:117] "RemoveContainer" containerID="81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.519657 4903 scope.go:117] "RemoveContainer" containerID="117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.519759 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7kks"] Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.532368 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g7kks"] Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.552293 4903 scope.go:117] "RemoveContainer" containerID="d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.609748 4903 scope.go:117] "RemoveContainer" containerID="81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899" Jan 28 17:38:09 crc kubenswrapper[4903]: E0128 17:38:09.610233 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899\": container with ID starting with 81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899 not found: ID does not exist" containerID="81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.610277 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899"} err="failed to get container status \"81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899\": rpc error: code = NotFound desc = could not find container \"81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899\": container with ID starting with 81f349a367f863fe068128fdbc3a093268ce35187efb9dc3821b84760bc89899 not found: ID does not exist" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.610311 4903 scope.go:117] "RemoveContainer" containerID="117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55" Jan 28 17:38:09 crc kubenswrapper[4903]: E0128 17:38:09.610812 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55\": container with ID starting with 117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55 not found: ID does not exist" containerID="117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.610844 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55"} err="failed to get container status \"117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55\": rpc error: code = NotFound desc = could not find container \"117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55\": container with ID starting with 117776291ef22ef2663ee5562a42b349cfc22f4bafb36e0c7faf15d60211ba55 not found: ID does not exist" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.610908 4903 scope.go:117] "RemoveContainer" 
containerID="d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce" Jan 28 17:38:09 crc kubenswrapper[4903]: E0128 17:38:09.611246 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce\": container with ID starting with d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce not found: ID does not exist" containerID="d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce" Jan 28 17:38:09 crc kubenswrapper[4903]: I0128 17:38:09.611300 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce"} err="failed to get container status \"d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce\": rpc error: code = NotFound desc = could not find container \"d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce\": container with ID starting with d1fb2f327b66b767b9c383ad0ee5ef6e04579463feb3d6f1a6965abca17351ce not found: ID does not exist" Jan 28 17:38:10 crc kubenswrapper[4903]: I0128 17:38:10.429580 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" path="/var/lib/kubelet/pods/96a701b8-16e8-4cda-afb0-46f577a6b53d/volumes" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.859933 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzwnq"] Jan 28 17:38:13 crc kubenswrapper[4903]: E0128 17:38:13.860813 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="registry-server" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.860834 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="registry-server" Jan 28 17:38:13 crc kubenswrapper[4903]: E0128 17:38:13.860868 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="extract-content" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.860877 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="extract-content" Jan 28 17:38:13 crc kubenswrapper[4903]: E0128 17:38:13.860897 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="extract-utilities" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.860905 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="extract-utilities" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.861123 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a701b8-16e8-4cda-afb0-46f577a6b53d" containerName="registry-server" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.862768 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.870706 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzwnq"] Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.942559 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7dv\" (UniqueName: \"kubernetes.io/projected/8bec5572-7f17-4525-80a9-4a879eb01e58-kube-api-access-xk7dv\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.942659 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-utilities\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:13 crc kubenswrapper[4903]: I0128 17:38:13.942751 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-catalog-content\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.058161 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7dv\" (UniqueName: \"kubernetes.io/projected/8bec5572-7f17-4525-80a9-4a879eb01e58-kube-api-access-xk7dv\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.058269 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-utilities\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.058378 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-catalog-content\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.059134 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-utilities\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.059224 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-catalog-content\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.080582 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xk7dv\" (UniqueName: \"kubernetes.io/projected/8bec5572-7f17-4525-80a9-4a879eb01e58-kube-api-access-xk7dv\") pod \"redhat-operators-fzwnq\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.195477 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.477783 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fd8sk"] Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.480854 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.497247 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fd8sk"] Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.675395 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-utilities\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.675767 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gm4\" (UniqueName: \"kubernetes.io/projected/74aa9a26-5824-4f2e-a5be-a5f129322104-kube-api-access-f2gm4\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.676051 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-catalog-content\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.717091 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzwnq"] Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.777991 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gm4\" (UniqueName: \"kubernetes.io/projected/74aa9a26-5824-4f2e-a5be-a5f129322104-kube-api-access-f2gm4\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.778149 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-catalog-content\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.778219 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-utilities\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " 
pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.778687 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-utilities\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.778728 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-catalog-content\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.806870 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gm4\" (UniqueName: \"kubernetes.io/projected/74aa9a26-5824-4f2e-a5be-a5f129322104-kube-api-access-f2gm4\") pod \"certified-operators-fd8sk\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:14 crc kubenswrapper[4903]: I0128 17:38:14.812756 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:15 crc kubenswrapper[4903]: I0128 17:38:15.383875 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fd8sk"] Jan 28 17:38:15 crc kubenswrapper[4903]: I0128 17:38:15.614360 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd8sk" event={"ID":"74aa9a26-5824-4f2e-a5be-a5f129322104","Type":"ContainerStarted","Data":"94202215cf7bd55a05483e1a5fbd636a865d71a582d7979331ef345f9edd1c81"} Jan 28 17:38:15 crc kubenswrapper[4903]: I0128 17:38:15.636629 4903 generic.go:334] "Generic (PLEG): container finished" podID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerID="da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b" exitCode=0 Jan 28 17:38:15 crc kubenswrapper[4903]: I0128 17:38:15.636686 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzwnq" event={"ID":"8bec5572-7f17-4525-80a9-4a879eb01e58","Type":"ContainerDied","Data":"da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b"} Jan 28 17:38:15 crc kubenswrapper[4903]: I0128 17:38:15.636719 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzwnq" event={"ID":"8bec5572-7f17-4525-80a9-4a879eb01e58","Type":"ContainerStarted","Data":"b5c35354fb08f4b4ae51b2bbc3be037e4a5e154463197decad47772a564eb8a1"} Jan 28 17:38:15 crc kubenswrapper[4903]: E0128 17:38:15.858658 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74aa9a26_5824_4f2e_a5be_a5f129322104.slice/crio-conmon-202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408.scope\": RecentStats: unable to find data in memory cache]" Jan 28 17:38:16 crc kubenswrapper[4903]: I0128 17:38:16.646564 4903 generic.go:334] "Generic (PLEG): container finished" podID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerID="202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408" exitCode=0 Jan 28 17:38:16 crc kubenswrapper[4903]: I0128 17:38:16.646652 4903 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd8sk" event={"ID":"74aa9a26-5824-4f2e-a5be-a5f129322104","Type":"ContainerDied","Data":"202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408"} Jan 28 17:38:17 crc kubenswrapper[4903]: I0128 17:38:17.659909 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd8sk" event={"ID":"74aa9a26-5824-4f2e-a5be-a5f129322104","Type":"ContainerStarted","Data":"2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4"} Jan 28 17:38:17 crc kubenswrapper[4903]: I0128 17:38:17.662321 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzwnq" event={"ID":"8bec5572-7f17-4525-80a9-4a879eb01e58","Type":"ContainerStarted","Data":"4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f"} Jan 28 17:38:21 crc kubenswrapper[4903]: I0128 17:38:21.706934 4903 generic.go:334] "Generic (PLEG): container finished" podID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerID="2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4" exitCode=0 Jan 28 17:38:21 crc kubenswrapper[4903]: I0128 17:38:21.707024 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd8sk" event={"ID":"74aa9a26-5824-4f2e-a5be-a5f129322104","Type":"ContainerDied","Data":"2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4"} Jan 28 17:38:23 crc kubenswrapper[4903]: I0128 17:38:23.727706 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd8sk" event={"ID":"74aa9a26-5824-4f2e-a5be-a5f129322104","Type":"ContainerStarted","Data":"0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418"} Jan 28 17:38:23 crc kubenswrapper[4903]: I0128 17:38:23.746809 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fd8sk" podStartSLOduration=3.8570106060000002 podStartE2EDuration="9.746785146s" podCreationTimestamp="2026-01-28 17:38:14 +0000 UTC" firstStartedPulling="2026-01-28 17:38:16.65104026 +0000 UTC m=+6768.927011771" lastFinishedPulling="2026-01-28 17:38:22.54081481 +0000 UTC m=+6774.816786311" observedRunningTime="2026-01-28 17:38:23.744408852 +0000 UTC m=+6776.020380373" watchObservedRunningTime="2026-01-28 17:38:23.746785146 +0000 UTC m=+6776.022756667" Jan 28 17:38:24 crc kubenswrapper[4903]: I0128 17:38:24.814420 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:24 crc kubenswrapper[4903]: I0128 17:38:24.814482 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:25 crc kubenswrapper[4903]: I0128 17:38:25.747010 4903 generic.go:334] "Generic (PLEG): container finished" podID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerID="4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f" exitCode=0 Jan 28 17:38:25 crc kubenswrapper[4903]: I0128 17:38:25.747061 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzwnq" event={"ID":"8bec5572-7f17-4525-80a9-4a879eb01e58","Type":"ContainerDied","Data":"4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f"} Jan 28 17:38:25 crc kubenswrapper[4903]: I0128 17:38:25.861099 4903 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-fd8sk" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="registry-server" probeResult="failure" output=< Jan 28 17:38:25 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:38:25 crc kubenswrapper[4903]: > Jan 28 17:38:26 crc kubenswrapper[4903]: I0128 17:38:26.759296 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzwnq" event={"ID":"8bec5572-7f17-4525-80a9-4a879eb01e58","Type":"ContainerStarted","Data":"2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684"} Jan 28 17:38:26 crc kubenswrapper[4903]: I0128 17:38:26.791705 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzwnq" podStartSLOduration=2.955604396 podStartE2EDuration="13.791681485s" podCreationTimestamp="2026-01-28 17:38:13 +0000 UTC" firstStartedPulling="2026-01-28 17:38:15.646285833 +0000 UTC m=+6767.922257344" lastFinishedPulling="2026-01-28 17:38:26.482362922 +0000 UTC m=+6778.758334433" observedRunningTime="2026-01-28 17:38:26.782244163 +0000 UTC m=+6779.058215694" watchObservedRunningTime="2026-01-28 17:38:26.791681485 +0000 UTC m=+6779.067653016" Jan 28 17:38:34 crc kubenswrapper[4903]: I0128 17:38:34.196317 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:34 crc kubenswrapper[4903]: I0128 17:38:34.196947 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:38:35 crc kubenswrapper[4903]: I0128 17:38:35.242385 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzwnq" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" probeResult="failure" output=< Jan 28 17:38:35 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:38:35 crc kubenswrapper[4903]: > Jan 28 17:38:35 crc kubenswrapper[4903]: I0128 17:38:35.864182 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fd8sk" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="registry-server" probeResult="failure" output=< Jan 28 17:38:35 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:38:35 crc kubenswrapper[4903]: > Jan 28 17:38:44 crc kubenswrapper[4903]: I0128 17:38:44.860562 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:44 crc kubenswrapper[4903]: I0128 17:38:44.911386 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:45 crc kubenswrapper[4903]: I0128 17:38:45.242371 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzwnq" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" probeResult="failure" output=< Jan 28 17:38:45 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:38:45 crc kubenswrapper[4903]: > Jan 28 17:38:45 crc kubenswrapper[4903]: I0128 17:38:45.663121 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fd8sk"] Jan 28 17:38:45 crc kubenswrapper[4903]: I0128 17:38:45.947249 4903 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/certified-operators-fd8sk" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="registry-server" containerID="cri-o://0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418" gracePeriod=2 Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.495668 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.512905 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-catalog-content\") pod \"74aa9a26-5824-4f2e-a5be-a5f129322104\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.513018 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2gm4\" (UniqueName: \"kubernetes.io/projected/74aa9a26-5824-4f2e-a5be-a5f129322104-kube-api-access-f2gm4\") pod \"74aa9a26-5824-4f2e-a5be-a5f129322104\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.521444 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74aa9a26-5824-4f2e-a5be-a5f129322104-kube-api-access-f2gm4" (OuterVolumeSpecName: "kube-api-access-f2gm4") pod "74aa9a26-5824-4f2e-a5be-a5f129322104" (UID: "74aa9a26-5824-4f2e-a5be-a5f129322104"). InnerVolumeSpecName "kube-api-access-f2gm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.570614 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74aa9a26-5824-4f2e-a5be-a5f129322104" (UID: "74aa9a26-5824-4f2e-a5be-a5f129322104"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.615446 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-utilities\") pod \"74aa9a26-5824-4f2e-a5be-a5f129322104\" (UID: \"74aa9a26-5824-4f2e-a5be-a5f129322104\") " Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.615869 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.615897 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2gm4\" (UniqueName: \"kubernetes.io/projected/74aa9a26-5824-4f2e-a5be-a5f129322104-kube-api-access-f2gm4\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.616172 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-utilities" (OuterVolumeSpecName: "utilities") pod "74aa9a26-5824-4f2e-a5be-a5f129322104" (UID: "74aa9a26-5824-4f2e-a5be-a5f129322104"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.717600 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74aa9a26-5824-4f2e-a5be-a5f129322104-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.959346 4903 generic.go:334] "Generic (PLEG): container finished" podID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerID="0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418" exitCode=0 Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.959394 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd8sk" event={"ID":"74aa9a26-5824-4f2e-a5be-a5f129322104","Type":"ContainerDied","Data":"0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418"} Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.959686 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fd8sk" event={"ID":"74aa9a26-5824-4f2e-a5be-a5f129322104","Type":"ContainerDied","Data":"94202215cf7bd55a05483e1a5fbd636a865d71a582d7979331ef345f9edd1c81"} Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.959440 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fd8sk" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.959725 4903 scope.go:117] "RemoveContainer" containerID="0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.995461 4903 scope.go:117] "RemoveContainer" containerID="2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4" Jan 28 17:38:46 crc kubenswrapper[4903]: I0128 17:38:46.997968 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fd8sk"] Jan 28 17:38:47 crc kubenswrapper[4903]: I0128 17:38:47.011791 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fd8sk"] Jan 28 17:38:47 crc kubenswrapper[4903]: I0128 17:38:47.012025 4903 scope.go:117] "RemoveContainer" containerID="202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408" Jan 28 17:38:47 crc kubenswrapper[4903]: I0128 17:38:47.067969 4903 scope.go:117] "RemoveContainer" containerID="0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418" Jan 28 17:38:47 crc kubenswrapper[4903]: E0128 17:38:47.071678 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418\": container with ID starting with 0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418 not found: ID does not exist" containerID="0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418" Jan 28 17:38:47 crc kubenswrapper[4903]: I0128 17:38:47.071721 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418"} err="failed to get container status \"0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418\": rpc error: code = NotFound desc = could not find container \"0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418\": container with ID starting with 0b77da0ebac17c19d5b53f3fed25661e38d914d5f96d05a0a6dafbb98f419418 not found: ID does not exist" Jan 28 17:38:47 crc 
kubenswrapper[4903]: I0128 17:38:47.071747 4903 scope.go:117] "RemoveContainer" containerID="2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4" Jan 28 17:38:47 crc kubenswrapper[4903]: E0128 17:38:47.072055 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4\": container with ID starting with 2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4 not found: ID does not exist" containerID="2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4" Jan 28 17:38:47 crc kubenswrapper[4903]: I0128 17:38:47.072077 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4"} err="failed to get container status \"2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4\": rpc error: code = NotFound desc = could not find container \"2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4\": container with ID starting with 2d561f7f378da4eef186fcb24b7aac6fab12c5516d92dfa3dd2e346da58c66a4 not found: ID does not exist" Jan 28 17:38:47 crc kubenswrapper[4903]: I0128 17:38:47.072091 4903 scope.go:117] "RemoveContainer" containerID="202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408" Jan 28 17:38:47 crc kubenswrapper[4903]: E0128 17:38:47.072386 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408\": container with ID starting with 202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408 not found: ID does not exist" containerID="202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408" Jan 28 17:38:47 crc kubenswrapper[4903]: I0128 17:38:47.072402 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408"} err="failed to get container status \"202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408\": rpc error: code = NotFound desc = could not find container \"202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408\": container with ID starting with 202f1719d3ee21eee0d0a7c7ac709cce50865ac6294e9e959e888dca5636a408 not found: ID does not exist" Jan 28 17:38:48 crc kubenswrapper[4903]: I0128 17:38:48.425665 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" path="/var/lib/kubelet/pods/74aa9a26-5824-4f2e-a5be-a5f129322104/volumes" Jan 28 17:38:55 crc kubenswrapper[4903]: I0128 17:38:55.247886 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzwnq" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" probeResult="failure" output=< Jan 28 17:38:55 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:38:55 crc kubenswrapper[4903]: > Jan 28 17:39:05 crc kubenswrapper[4903]: I0128 17:39:05.245601 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fzwnq" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" probeResult="failure" output=< Jan 28 17:39:05 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:39:05 crc kubenswrapper[4903]: > Jan 28 17:39:14 
crc kubenswrapper[4903]: I0128 17:39:14.258073 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:39:14 crc kubenswrapper[4903]: I0128 17:39:14.315343 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:39:14 crc kubenswrapper[4903]: I0128 17:39:14.500686 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzwnq"] Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.261937 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fzwnq" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" containerID="cri-o://2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684" gracePeriod=2 Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.783863 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.895083 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-utilities\") pod \"8bec5572-7f17-4525-80a9-4a879eb01e58\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.895836 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-catalog-content\") pod \"8bec5572-7f17-4525-80a9-4a879eb01e58\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.896110 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk7dv\" (UniqueName: \"kubernetes.io/projected/8bec5572-7f17-4525-80a9-4a879eb01e58-kube-api-access-xk7dv\") pod \"8bec5572-7f17-4525-80a9-4a879eb01e58\" (UID: \"8bec5572-7f17-4525-80a9-4a879eb01e58\") " Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.896218 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-utilities" (OuterVolumeSpecName: "utilities") pod "8bec5572-7f17-4525-80a9-4a879eb01e58" (UID: "8bec5572-7f17-4525-80a9-4a879eb01e58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.897106 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:16 crc kubenswrapper[4903]: I0128 17:39:16.914840 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bec5572-7f17-4525-80a9-4a879eb01e58-kube-api-access-xk7dv" (OuterVolumeSpecName: "kube-api-access-xk7dv") pod "8bec5572-7f17-4525-80a9-4a879eb01e58" (UID: "8bec5572-7f17-4525-80a9-4a879eb01e58"). InnerVolumeSpecName "kube-api-access-xk7dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.000158 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk7dv\" (UniqueName: \"kubernetes.io/projected/8bec5572-7f17-4525-80a9-4a879eb01e58-kube-api-access-xk7dv\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.013699 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bec5572-7f17-4525-80a9-4a879eb01e58" (UID: "8bec5572-7f17-4525-80a9-4a879eb01e58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.102842 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bec5572-7f17-4525-80a9-4a879eb01e58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.275776 4903 generic.go:334] "Generic (PLEG): container finished" podID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerID="2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684" exitCode=0 Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.275823 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzwnq" event={"ID":"8bec5572-7f17-4525-80a9-4a879eb01e58","Type":"ContainerDied","Data":"2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684"} Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.275860 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzwnq" event={"ID":"8bec5572-7f17-4525-80a9-4a879eb01e58","Type":"ContainerDied","Data":"b5c35354fb08f4b4ae51b2bbc3be037e4a5e154463197decad47772a564eb8a1"} Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.275885 4903 scope.go:117] "RemoveContainer" containerID="2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.275906 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzwnq" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.295597 4903 scope.go:117] "RemoveContainer" containerID="4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.317651 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzwnq"] Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.327436 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzwnq"] Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.339833 4903 scope.go:117] "RemoveContainer" containerID="da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.379999 4903 scope.go:117] "RemoveContainer" containerID="2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684" Jan 28 17:39:17 crc kubenswrapper[4903]: E0128 17:39:17.380667 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684\": container with ID starting with 2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684 not found: ID does not exist" containerID="2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.380707 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684"} err="failed to get container status \"2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684\": rpc error: code = NotFound desc = could not find container \"2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684\": container with ID starting with 2d3dabd82b44862a09f9ae601d5ee89fe54b9400368b84b1f5155768da05a684 not found: ID does not exist" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.380733 4903 scope.go:117] "RemoveContainer" containerID="4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f" Jan 28 17:39:17 crc kubenswrapper[4903]: E0128 17:39:17.381065 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f\": container with ID starting with 4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f not found: ID does not exist" containerID="4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.381123 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f"} err="failed to get container status \"4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f\": rpc error: code = NotFound desc = could not find container \"4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f\": container with ID starting with 4f17fdc1c3d92107a02abf38d4f6451525e0c063d674c2ee11f4e8860a85a17f not found: ID does not exist" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.381159 4903 scope.go:117] "RemoveContainer" containerID="da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b" Jan 28 17:39:17 crc kubenswrapper[4903]: E0128 17:39:17.381568 4903 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b\": container with ID starting with da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b not found: ID does not exist" containerID="da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b" Jan 28 17:39:17 crc kubenswrapper[4903]: I0128 17:39:17.381603 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b"} err="failed to get container status \"da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b\": rpc error: code = NotFound desc = could not find container \"da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b\": container with ID starting with da1c5da5140d40951622d4f723203c9d74523861e3ab545378c85afdb36fd65b not found: ID does not exist" Jan 28 17:39:17 crc kubenswrapper[4903]: E0128 17:39:17.540756 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bec5572_7f17_4525_80a9_4a879eb01e58.slice/crio-b5c35354fb08f4b4ae51b2bbc3be037e4a5e154463197decad47772a564eb8a1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bec5572_7f17_4525_80a9_4a879eb01e58.slice\": RecentStats: unable to find data in memory cache]" Jan 28 17:39:18 crc kubenswrapper[4903]: I0128 17:39:18.426906 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" path="/var/lib/kubelet/pods/8bec5572-7f17-4525-80a9-4a879eb01e58/volumes" Jan 28 17:39:56 crc kubenswrapper[4903]: I0128 17:39:56.614051 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:39:56 crc kubenswrapper[4903]: I0128 17:39:56.614779 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:39:59 crc kubenswrapper[4903]: I0128 17:39:59.539409 4903 scope.go:117] "RemoveContainer" containerID="1f9147da7b7d4a4b2f70ea563917512657415fe25368612e72f11d409e6682d5" Jan 28 17:40:26 crc kubenswrapper[4903]: I0128 17:40:26.613432 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:40:26 crc kubenswrapper[4903]: I0128 17:40:26.614202 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:40:56 crc kubenswrapper[4903]: I0128 17:40:56.614026 4903 patch_prober.go:28] interesting 
pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:40:56 crc kubenswrapper[4903]: I0128 17:40:56.614667 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:40:56 crc kubenswrapper[4903]: I0128 17:40:56.614715 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:40:56 crc kubenswrapper[4903]: I0128 17:40:56.615563 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7af3753e515febbe71c10e97c6ff6f7c034bc5e56b78fb1a144c96585e23f42"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:40:56 crc kubenswrapper[4903]: I0128 17:40:56.615616 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://d7af3753e515febbe71c10e97c6ff6f7c034bc5e56b78fb1a144c96585e23f42" gracePeriod=600 Jan 28 17:40:57 crc kubenswrapper[4903]: I0128 17:40:57.668319 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="d7af3753e515febbe71c10e97c6ff6f7c034bc5e56b78fb1a144c96585e23f42" exitCode=0 Jan 28 17:40:57 crc kubenswrapper[4903]: I0128 17:40:57.668444 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"d7af3753e515febbe71c10e97c6ff6f7c034bc5e56b78fb1a144c96585e23f42"} Jan 28 17:40:57 crc kubenswrapper[4903]: I0128 17:40:57.669308 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65"} Jan 28 17:40:57 crc kubenswrapper[4903]: I0128 17:40:57.669360 4903 scope.go:117] "RemoveContainer" containerID="25ad53d690a116e4db284e7c5bc7181744668e7451f12b52e710a67fa255b98f" Jan 28 17:40:58 crc kubenswrapper[4903]: I0128 17:40:58.056090 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-85a4-account-create-update-lgq85"] Jan 28 17:40:58 crc kubenswrapper[4903]: I0128 17:40:58.069605 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-l2shz"] Jan 28 17:40:58 crc kubenswrapper[4903]: I0128 17:40:58.080135 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-85a4-account-create-update-lgq85"] Jan 28 17:40:58 crc kubenswrapper[4903]: I0128 17:40:58.091317 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-l2shz"] Jan 28 17:40:58 crc kubenswrapper[4903]: I0128 17:40:58.425577 4903 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="a309e160-d43c-4d46-b4b1-77e53a64e845" path="/var/lib/kubelet/pods/a309e160-d43c-4d46-b4b1-77e53a64e845/volumes" Jan 28 17:40:58 crc kubenswrapper[4903]: I0128 17:40:58.426396 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24798df-5487-4f50-8a20-8c1890f588ed" path="/var/lib/kubelet/pods/e24798df-5487-4f50-8a20-8c1890f588ed/volumes" Jan 28 17:40:59 crc kubenswrapper[4903]: I0128 17:40:59.628841 4903 scope.go:117] "RemoveContainer" containerID="0df13b8f96907041432d758342af1cc3472c1c47539664c799ea6d0dcef496b5" Jan 28 17:40:59 crc kubenswrapper[4903]: I0128 17:40:59.678366 4903 scope.go:117] "RemoveContainer" containerID="f373d7d7a531285bca9707fadf60c03dce46b02197924ad42ddec3726e309b5d" Jan 28 17:41:14 crc kubenswrapper[4903]: I0128 17:41:14.031661 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-ndrvj"] Jan 28 17:41:14 crc kubenswrapper[4903]: I0128 17:41:14.040776 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-ndrvj"] Jan 28 17:41:14 crc kubenswrapper[4903]: I0128 17:41:14.424292 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2" path="/var/lib/kubelet/pods/77fe1f32-87e8-45f0-ba9d-b88b2cc62ad2/volumes" Jan 28 17:41:59 crc kubenswrapper[4903]: I0128 17:41:59.816619 4903 scope.go:117] "RemoveContainer" containerID="9cd945a6a2689175914f96b46101de957c69f45dfbd7651bc8eb024e0bc09b47" Jan 28 17:42:59 crc kubenswrapper[4903]: I0128 17:42:59.900953 4903 scope.go:117] "RemoveContainer" containerID="cb471616ca5ce78b8635a055ad7be40a005ac8e102742e8ab964e50009c31682" Jan 28 17:42:59 crc kubenswrapper[4903]: I0128 17:42:59.930156 4903 scope.go:117] "RemoveContainer" containerID="5e0236a69805736669ba4f0aaa186e9a315df4a5f7cfc508890aac282c4ed711" Jan 28 17:42:59 crc kubenswrapper[4903]: I0128 17:42:59.999516 4903 scope.go:117] "RemoveContainer" containerID="c99d79bf19b7e9200297cb16b37716db2d9f17efda1b2396966e4a1e111748c8" Jan 28 17:43:26 crc kubenswrapper[4903]: I0128 17:43:26.613381 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:43:26 crc kubenswrapper[4903]: I0128 17:43:26.613877 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:43:56 crc kubenswrapper[4903]: I0128 17:43:56.614418 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:43:56 crc kubenswrapper[4903]: I0128 17:43:56.615045 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:44:16 crc 
kubenswrapper[4903]: I0128 17:44:16.048613 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-pzbhp"] Jan 28 17:44:16 crc kubenswrapper[4903]: I0128 17:44:16.062004 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-a817-account-create-update-ztcqh"] Jan 28 17:44:16 crc kubenswrapper[4903]: I0128 17:44:16.071653 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-a817-account-create-update-ztcqh"] Jan 28 17:44:16 crc kubenswrapper[4903]: I0128 17:44:16.078860 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-pzbhp"] Jan 28 17:44:16 crc kubenswrapper[4903]: I0128 17:44:16.427270 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af3e35a-a105-4812-9f41-c49343319188" path="/var/lib/kubelet/pods/3af3e35a-a105-4812-9f41-c49343319188/volumes" Jan 28 17:44:16 crc kubenswrapper[4903]: I0128 17:44:16.428262 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4658f79-2284-4761-b715-0e0af88f2439" path="/var/lib/kubelet/pods/a4658f79-2284-4761-b715-0e0af88f2439/volumes" Jan 28 17:44:26 crc kubenswrapper[4903]: I0128 17:44:26.613821 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:44:26 crc kubenswrapper[4903]: I0128 17:44:26.614313 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:44:26 crc kubenswrapper[4903]: I0128 17:44:26.614358 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:44:26 crc kubenswrapper[4903]: I0128 17:44:26.615153 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:44:26 crc kubenswrapper[4903]: I0128 17:44:26.615198 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" gracePeriod=600 Jan 28 17:44:26 crc kubenswrapper[4903]: E0128 17:44:26.733790 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:44:27 crc kubenswrapper[4903]: I0128 17:44:27.689741 4903 generic.go:334] "Generic (PLEG): container finished" 
podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" exitCode=0 Jan 28 17:44:27 crc kubenswrapper[4903]: I0128 17:44:27.689800 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65"} Jan 28 17:44:27 crc kubenswrapper[4903]: I0128 17:44:27.689855 4903 scope.go:117] "RemoveContainer" containerID="d7af3753e515febbe71c10e97c6ff6f7c034bc5e56b78fb1a144c96585e23f42" Jan 28 17:44:27 crc kubenswrapper[4903]: I0128 17:44:27.690645 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:44:27 crc kubenswrapper[4903]: E0128 17:44:27.691027 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:44:28 crc kubenswrapper[4903]: I0128 17:44:28.053122 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-chqvt"] Jan 28 17:44:28 crc kubenswrapper[4903]: I0128 17:44:28.064085 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-chqvt"] Jan 28 17:44:28 crc kubenswrapper[4903]: I0128 17:44:28.432937 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb25902-814a-41c3-b37d-827e3f4e2e93" path="/var/lib/kubelet/pods/1fb25902-814a-41c3-b37d-827e3f4e2e93/volumes" Jan 28 17:44:41 crc kubenswrapper[4903]: I0128 17:44:41.413777 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:44:41 crc kubenswrapper[4903]: E0128 17:44:41.414501 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:44:52 crc kubenswrapper[4903]: I0128 17:44:52.413606 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:44:52 crc kubenswrapper[4903]: E0128 17:44:52.414392 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.090320 4903 scope.go:117] "RemoveContainer" containerID="dfdd4ee0a64e2f12c19cef7560daf8f22096487bad6e9bb5efa21a49d32923b2" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.112706 4903 scope.go:117] "RemoveContainer" 
containerID="e66e2213492e8778a381661496e6aa4d3f2b04373813ae42b34899ae580175ee" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.160562 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f"] Jan 28 17:45:00 crc kubenswrapper[4903]: E0128 17:45:00.161080 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="extract-utilities" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.161105 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="extract-utilities" Jan 28 17:45:00 crc kubenswrapper[4903]: E0128 17:45:00.161132 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="extract-content" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.161140 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="extract-content" Jan 28 17:45:00 crc kubenswrapper[4903]: E0128 17:45:00.161159 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="registry-server" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.161169 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="registry-server" Jan 28 17:45:00 crc kubenswrapper[4903]: E0128 17:45:00.161185 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="extract-content" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.161192 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="extract-content" Jan 28 17:45:00 crc kubenswrapper[4903]: E0128 17:45:00.161202 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.161211 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" Jan 28 17:45:00 crc kubenswrapper[4903]: E0128 17:45:00.161227 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="extract-utilities" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.161236 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="extract-utilities" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.162254 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bec5572-7f17-4525-80a9-4a879eb01e58" containerName="registry-server" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.162277 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="74aa9a26-5824-4f2e-a5be-a5f129322104" containerName="registry-server" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.163173 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.168857 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ce95c83-ed4d-4e48-a31f-44e1b730e923-config-volume\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.168996 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ce95c83-ed4d-4e48-a31f-44e1b730e923-secret-volume\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.169055 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8rv\" (UniqueName: \"kubernetes.io/projected/1ce95c83-ed4d-4e48-a31f-44e1b730e923-kube-api-access-4n8rv\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.170335 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.170609 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.173250 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f"] Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.215478 4903 scope.go:117] "RemoveContainer" containerID="8f363c2379a3abd90a0379ed8a346e41b12df7d3790a0e192c7a5cb1c13dc5d7" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.272117 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n8rv\" (UniqueName: \"kubernetes.io/projected/1ce95c83-ed4d-4e48-a31f-44e1b730e923-kube-api-access-4n8rv\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.272299 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ce95c83-ed4d-4e48-a31f-44e1b730e923-config-volume\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.272407 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ce95c83-ed4d-4e48-a31f-44e1b730e923-secret-volume\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.273681 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ce95c83-ed4d-4e48-a31f-44e1b730e923-config-volume\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.278911 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ce95c83-ed4d-4e48-a31f-44e1b730e923-secret-volume\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.292008 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n8rv\" (UniqueName: \"kubernetes.io/projected/1ce95c83-ed4d-4e48-a31f-44e1b730e923-kube-api-access-4n8rv\") pod \"collect-profiles-29493705-75j8f\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.356375 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.824469 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f"] Jan 28 17:45:00 crc kubenswrapper[4903]: I0128 17:45:00.997370 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" event={"ID":"1ce95c83-ed4d-4e48-a31f-44e1b730e923","Type":"ContainerStarted","Data":"20d2dd387b799be3ab29370ab47ffccb35260a1e59edf6d64b6a804ca9d20ec0"} Jan 28 17:45:02 crc kubenswrapper[4903]: I0128 17:45:02.008872 4903 generic.go:334] "Generic (PLEG): container finished" podID="1ce95c83-ed4d-4e48-a31f-44e1b730e923" containerID="29a8a7970a454aaa477792b4487d625a3ab335c79638a767dd7a4888c535d7a1" exitCode=0 Jan 28 17:45:02 crc kubenswrapper[4903]: I0128 17:45:02.009031 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" event={"ID":"1ce95c83-ed4d-4e48-a31f-44e1b730e923","Type":"ContainerDied","Data":"29a8a7970a454aaa477792b4487d625a3ab335c79638a767dd7a4888c535d7a1"} Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.517061 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.559541 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ce95c83-ed4d-4e48-a31f-44e1b730e923-secret-volume\") pod \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.559612 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ce95c83-ed4d-4e48-a31f-44e1b730e923-config-volume\") pod \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.559707 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n8rv\" (UniqueName: \"kubernetes.io/projected/1ce95c83-ed4d-4e48-a31f-44e1b730e923-kube-api-access-4n8rv\") pod \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\" (UID: \"1ce95c83-ed4d-4e48-a31f-44e1b730e923\") " Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.561099 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce95c83-ed4d-4e48-a31f-44e1b730e923-config-volume" (OuterVolumeSpecName: "config-volume") pod "1ce95c83-ed4d-4e48-a31f-44e1b730e923" (UID: "1ce95c83-ed4d-4e48-a31f-44e1b730e923"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.574965 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce95c83-ed4d-4e48-a31f-44e1b730e923-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1ce95c83-ed4d-4e48-a31f-44e1b730e923" (UID: "1ce95c83-ed4d-4e48-a31f-44e1b730e923"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.576851 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce95c83-ed4d-4e48-a31f-44e1b730e923-kube-api-access-4n8rv" (OuterVolumeSpecName: "kube-api-access-4n8rv") pod "1ce95c83-ed4d-4e48-a31f-44e1b730e923" (UID: "1ce95c83-ed4d-4e48-a31f-44e1b730e923"). InnerVolumeSpecName "kube-api-access-4n8rv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.662893 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ce95c83-ed4d-4e48-a31f-44e1b730e923-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.662926 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ce95c83-ed4d-4e48-a31f-44e1b730e923-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:03 crc kubenswrapper[4903]: I0128 17:45:03.662955 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n8rv\" (UniqueName: \"kubernetes.io/projected/1ce95c83-ed4d-4e48-a31f-44e1b730e923-kube-api-access-4n8rv\") on node \"crc\" DevicePath \"\"" Jan 28 17:45:04 crc kubenswrapper[4903]: I0128 17:45:04.026368 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" event={"ID":"1ce95c83-ed4d-4e48-a31f-44e1b730e923","Type":"ContainerDied","Data":"20d2dd387b799be3ab29370ab47ffccb35260a1e59edf6d64b6a804ca9d20ec0"} Jan 28 17:45:04 crc kubenswrapper[4903]: I0128 17:45:04.026411 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d2dd387b799be3ab29370ab47ffccb35260a1e59edf6d64b6a804ca9d20ec0" Jan 28 17:45:04 crc kubenswrapper[4903]: I0128 17:45:04.026482 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493705-75j8f" Jan 28 17:45:04 crc kubenswrapper[4903]: I0128 17:45:04.609737 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg"] Jan 28 17:45:04 crc kubenswrapper[4903]: I0128 17:45:04.620203 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493660-g87gg"] Jan 28 17:45:06 crc kubenswrapper[4903]: I0128 17:45:06.424519 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900499d1-401f-47f7-8646-e86b1edcaece" path="/var/lib/kubelet/pods/900499d1-401f-47f7-8646-e86b1edcaece/volumes" Jan 28 17:45:07 crc kubenswrapper[4903]: I0128 17:45:07.417824 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:45:07 crc kubenswrapper[4903]: E0128 17:45:07.424662 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:45:19 crc kubenswrapper[4903]: I0128 17:45:19.413490 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:45:19 crc kubenswrapper[4903]: E0128 17:45:19.414314 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:45:34 crc kubenswrapper[4903]: I0128 17:45:34.413400 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:45:34 crc kubenswrapper[4903]: E0128 17:45:34.414291 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:45:45 crc kubenswrapper[4903]: I0128 17:45:45.413409 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:45:45 crc kubenswrapper[4903]: E0128 17:45:45.414227 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:46:00 crc kubenswrapper[4903]: I0128 17:46:00.349093 4903 scope.go:117] "RemoveContainer" containerID="b8788feddf94c8f2d1c2d6fbdd25bb373ddf45c088b0502ee07b79b4152f37ec" Jan 28 17:46:00 crc kubenswrapper[4903]: I0128 17:46:00.413656 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:46:00 crc kubenswrapper[4903]: E0128 17:46:00.413976 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:46:15 crc kubenswrapper[4903]: I0128 17:46:15.413691 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:46:15 crc kubenswrapper[4903]: E0128 17:46:15.414610 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:46:26 crc kubenswrapper[4903]: I0128 17:46:26.413839 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:46:26 crc kubenswrapper[4903]: E0128 17:46:26.414666 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:46:38 crc kubenswrapper[4903]: I0128 17:46:38.427316 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:46:38 crc kubenswrapper[4903]: E0128 17:46:38.428577 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:46:49 crc kubenswrapper[4903]: I0128 17:46:49.413450 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:46:49 crc kubenswrapper[4903]: E0128 17:46:49.414532 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:47:04 crc kubenswrapper[4903]: I0128 17:47:04.414128 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:47:04 crc kubenswrapper[4903]: E0128 17:47:04.415079 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:47:17 crc kubenswrapper[4903]: I0128 17:47:17.413788 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:47:17 crc kubenswrapper[4903]: E0128 17:47:17.414497 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.346228 4903 generic.go:334] "Generic (PLEG): container finished" podID="1058a0d0-b6fe-458c-95f6-ab19e47c2043" containerID="55ad7b7905c47050b5777f185aef537948745ca7847376dfc75dd0d88f5f47b9" exitCode=0 Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.346311 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" event={"ID":"1058a0d0-b6fe-458c-95f6-ab19e47c2043","Type":"ContainerDied","Data":"55ad7b7905c47050b5777f185aef537948745ca7847376dfc75dd0d88f5f47b9"} Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.736590 4903 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-fgk9c"] Jan 28 17:47:27 crc kubenswrapper[4903]: E0128 17:47:27.737187 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce95c83-ed4d-4e48-a31f-44e1b730e923" containerName="collect-profiles" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.737211 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce95c83-ed4d-4e48-a31f-44e1b730e923" containerName="collect-profiles" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.737492 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce95c83-ed4d-4e48-a31f-44e1b730e923" containerName="collect-profiles" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.739556 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.747754 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgk9c"] Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.867412 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-utilities\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.867959 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/3f0724bd-adbd-49be-9a87-fe4712dd15df-kube-api-access-kg64d\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.868244 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-catalog-content\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.970557 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-utilities\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.970649 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/3f0724bd-adbd-49be-9a87-fe4712dd15df-kube-api-access-kg64d\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.970742 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-catalog-content\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.971215 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-catalog-content\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.971216 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-utilities\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:27 crc kubenswrapper[4903]: I0128 17:47:27.995564 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/3f0724bd-adbd-49be-9a87-fe4712dd15df-kube-api-access-kg64d\") pod \"redhat-marketplace-fgk9c\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.065536 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.570173 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgk9c"] Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.773567 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.896790 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-ssh-key-openstack-cell1\") pod \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.897049 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wbx\" (UniqueName: \"kubernetes.io/projected/1058a0d0-b6fe-458c-95f6-ab19e47c2043-kube-api-access-s7wbx\") pod \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.897108 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-inventory\") pod \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.897167 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-tripleo-cleanup-combined-ca-bundle\") pod \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\" (UID: \"1058a0d0-b6fe-458c-95f6-ab19e47c2043\") " Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.903009 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "1058a0d0-b6fe-458c-95f6-ab19e47c2043" (UID: "1058a0d0-b6fe-458c-95f6-ab19e47c2043"). 
InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.903800 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1058a0d0-b6fe-458c-95f6-ab19e47c2043-kube-api-access-s7wbx" (OuterVolumeSpecName: "kube-api-access-s7wbx") pod "1058a0d0-b6fe-458c-95f6-ab19e47c2043" (UID: "1058a0d0-b6fe-458c-95f6-ab19e47c2043"). InnerVolumeSpecName "kube-api-access-s7wbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.929565 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1058a0d0-b6fe-458c-95f6-ab19e47c2043" (UID: "1058a0d0-b6fe-458c-95f6-ab19e47c2043"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:28 crc kubenswrapper[4903]: I0128 17:47:28.932085 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-inventory" (OuterVolumeSpecName: "inventory") pod "1058a0d0-b6fe-458c-95f6-ab19e47c2043" (UID: "1058a0d0-b6fe-458c-95f6-ab19e47c2043"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.002438 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7wbx\" (UniqueName: \"kubernetes.io/projected/1058a0d0-b6fe-458c-95f6-ab19e47c2043-kube-api-access-s7wbx\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.002550 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.002567 4903 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.002580 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1058a0d0-b6fe-458c-95f6-ab19e47c2043-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.366269 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" event={"ID":"1058a0d0-b6fe-458c-95f6-ab19e47c2043","Type":"ContainerDied","Data":"057dfff82b590a44ba5da692181e33a69f4cac5c500224592b1216a4c999b450"} Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.366644 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057dfff82b590a44ba5da692181e33a69f4cac5c500224592b1216a4c999b450" Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.366292 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-7wg2z" Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.367778 4903 generic.go:334] "Generic (PLEG): container finished" podID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerID="5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889" exitCode=0 Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.367816 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgk9c" event={"ID":"3f0724bd-adbd-49be-9a87-fe4712dd15df","Type":"ContainerDied","Data":"5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889"} Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.367839 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgk9c" event={"ID":"3f0724bd-adbd-49be-9a87-fe4712dd15df","Type":"ContainerStarted","Data":"508bc71af6c9b0745e61c95ccf5213ef7bcdb4550489da59b249b7001cb2d85d"} Jan 28 17:47:29 crc kubenswrapper[4903]: I0128 17:47:29.370446 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:47:31 crc kubenswrapper[4903]: I0128 17:47:31.386969 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgk9c" event={"ID":"3f0724bd-adbd-49be-9a87-fe4712dd15df","Type":"ContainerStarted","Data":"5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6"} Jan 28 17:47:31 crc kubenswrapper[4903]: I0128 17:47:31.414648 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:47:31 crc kubenswrapper[4903]: E0128 17:47:31.414945 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:47:32 crc kubenswrapper[4903]: I0128 17:47:32.399418 4903 generic.go:334] "Generic (PLEG): container finished" podID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerID="5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6" exitCode=0 Jan 28 17:47:32 crc kubenswrapper[4903]: I0128 17:47:32.399549 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgk9c" event={"ID":"3f0724bd-adbd-49be-9a87-fe4712dd15df","Type":"ContainerDied","Data":"5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6"} Jan 28 17:47:33 crc kubenswrapper[4903]: I0128 17:47:33.411259 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgk9c" event={"ID":"3f0724bd-adbd-49be-9a87-fe4712dd15df","Type":"ContainerStarted","Data":"ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3"} Jan 28 17:47:33 crc kubenswrapper[4903]: I0128 17:47:33.443009 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgk9c" podStartSLOduration=2.833164803 podStartE2EDuration="6.442980784s" podCreationTimestamp="2026-01-28 17:47:27 +0000 UTC" firstStartedPulling="2026-01-28 17:47:29.370168908 +0000 UTC m=+7321.646140419" lastFinishedPulling="2026-01-28 17:47:32.979984889 +0000 UTC m=+7325.255956400" observedRunningTime="2026-01-28 
17:47:33.436167587 +0000 UTC m=+7325.712139098" watchObservedRunningTime="2026-01-28 17:47:33.442980784 +0000 UTC m=+7325.718952295" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.577625 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-psvpm"] Jan 28 17:47:35 crc kubenswrapper[4903]: E0128 17:47:35.578046 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1058a0d0-b6fe-458c-95f6-ab19e47c2043" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.578060 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1058a0d0-b6fe-458c-95f6-ab19e47c2043" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.578282 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1058a0d0-b6fe-458c-95f6-ab19e47c2043" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.579098 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.581485 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.581837 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.581931 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.582789 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.590748 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-psvpm"] Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.758881 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.758928 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6wjt\" (UniqueName: \"kubernetes.io/projected/4b970ca2-2eb3-43db-a58e-624d275ecf17-kube-api-access-r6wjt\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.759024 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-inventory\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.759099 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.862947 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.864227 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6wjt\" (UniqueName: \"kubernetes.io/projected/4b970ca2-2eb3-43db-a58e-624d275ecf17-kube-api-access-r6wjt\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.864436 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-inventory\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.864551 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.871458 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-inventory\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.872717 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.877344 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.892216 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6wjt\" (UniqueName: \"kubernetes.io/projected/4b970ca2-2eb3-43db-a58e-624d275ecf17-kube-api-access-r6wjt\") pod 
\"bootstrap-openstack-openstack-cell1-psvpm\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:35 crc kubenswrapper[4903]: I0128 17:47:35.898446 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:47:36 crc kubenswrapper[4903]: I0128 17:47:36.662774 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-psvpm"] Jan 28 17:47:37 crc kubenswrapper[4903]: I0128 17:47:37.478888 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" event={"ID":"4b970ca2-2eb3-43db-a58e-624d275ecf17","Type":"ContainerStarted","Data":"a918577c6009a4d346344f3d6ff78fc3aa2080c50e2ce80d7e501fb561949ec4"} Jan 28 17:47:38 crc kubenswrapper[4903]: I0128 17:47:38.066174 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:38 crc kubenswrapper[4903]: I0128 17:47:38.066454 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:38 crc kubenswrapper[4903]: I0128 17:47:38.118123 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:38 crc kubenswrapper[4903]: I0128 17:47:38.488274 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" event={"ID":"4b970ca2-2eb3-43db-a58e-624d275ecf17","Type":"ContainerStarted","Data":"e81804d02eb5af352af58e1b28c7143c874ee750b003132533c94ef627602256"} Jan 28 17:47:38 crc kubenswrapper[4903]: I0128 17:47:38.510057 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" podStartSLOduration=2.788817208 podStartE2EDuration="3.510038556s" podCreationTimestamp="2026-01-28 17:47:35 +0000 UTC" firstStartedPulling="2026-01-28 17:47:36.681739905 +0000 UTC m=+7328.957711416" lastFinishedPulling="2026-01-28 17:47:37.402961263 +0000 UTC m=+7329.678932764" observedRunningTime="2026-01-28 17:47:38.50506831 +0000 UTC m=+7330.781039821" watchObservedRunningTime="2026-01-28 17:47:38.510038556 +0000 UTC m=+7330.786010067" Jan 28 17:47:38 crc kubenswrapper[4903]: I0128 17:47:38.536361 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:38 crc kubenswrapper[4903]: I0128 17:47:38.588085 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgk9c"] Jan 28 17:47:40 crc kubenswrapper[4903]: I0128 17:47:40.506034 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgk9c" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="registry-server" containerID="cri-o://ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3" gracePeriod=2 Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.018246 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.205063 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-utilities\") pod \"3f0724bd-adbd-49be-9a87-fe4712dd15df\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.205187 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-catalog-content\") pod \"3f0724bd-adbd-49be-9a87-fe4712dd15df\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.205400 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/3f0724bd-adbd-49be-9a87-fe4712dd15df-kube-api-access-kg64d\") pod \"3f0724bd-adbd-49be-9a87-fe4712dd15df\" (UID: \"3f0724bd-adbd-49be-9a87-fe4712dd15df\") " Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.206041 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-utilities" (OuterVolumeSpecName: "utilities") pod "3f0724bd-adbd-49be-9a87-fe4712dd15df" (UID: "3f0724bd-adbd-49be-9a87-fe4712dd15df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.212381 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0724bd-adbd-49be-9a87-fe4712dd15df-kube-api-access-kg64d" (OuterVolumeSpecName: "kube-api-access-kg64d") pod "3f0724bd-adbd-49be-9a87-fe4712dd15df" (UID: "3f0724bd-adbd-49be-9a87-fe4712dd15df"). InnerVolumeSpecName "kube-api-access-kg64d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.231828 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f0724bd-adbd-49be-9a87-fe4712dd15df" (UID: "3f0724bd-adbd-49be-9a87-fe4712dd15df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.308247 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg64d\" (UniqueName: \"kubernetes.io/projected/3f0724bd-adbd-49be-9a87-fe4712dd15df-kube-api-access-kg64d\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.308649 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.308661 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f0724bd-adbd-49be-9a87-fe4712dd15df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.517799 4903 generic.go:334] "Generic (PLEG): container finished" podID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerID="ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3" exitCode=0 Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.517853 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgk9c" event={"ID":"3f0724bd-adbd-49be-9a87-fe4712dd15df","Type":"ContainerDied","Data":"ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3"} Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.517881 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgk9c" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.517899 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgk9c" event={"ID":"3f0724bd-adbd-49be-9a87-fe4712dd15df","Type":"ContainerDied","Data":"508bc71af6c9b0745e61c95ccf5213ef7bcdb4550489da59b249b7001cb2d85d"} Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.517929 4903 scope.go:117] "RemoveContainer" containerID="ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.539216 4903 scope.go:117] "RemoveContainer" containerID="5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.586118 4903 scope.go:117] "RemoveContainer" containerID="5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.589227 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgk9c"] Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.633701 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgk9c"] Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.680700 4903 scope.go:117] "RemoveContainer" containerID="ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3" Jan 28 17:47:41 crc kubenswrapper[4903]: E0128 17:47:41.684636 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3\": container with ID starting with ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3 not found: ID does not exist" containerID="ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.684678 4903 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3"} err="failed to get container status \"ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3\": rpc error: code = NotFound desc = could not find container \"ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3\": container with ID starting with ac41b7884f0255fe73ce98e976fe781066793b158970fb0bdbea5d3793b61cd3 not found: ID does not exist" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.684711 4903 scope.go:117] "RemoveContainer" containerID="5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6" Jan 28 17:47:41 crc kubenswrapper[4903]: E0128 17:47:41.688101 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6\": container with ID starting with 5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6 not found: ID does not exist" containerID="5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.688175 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6"} err="failed to get container status \"5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6\": rpc error: code = NotFound desc = could not find container \"5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6\": container with ID starting with 5879f702eb8403655fe25eef82435c2be5224e90a7afa4f75a9a1a7520743ba6 not found: ID does not exist" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.688202 4903 scope.go:117] "RemoveContainer" containerID="5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889" Jan 28 17:47:41 crc kubenswrapper[4903]: E0128 17:47:41.692657 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889\": container with ID starting with 5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889 not found: ID does not exist" containerID="5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889" Jan 28 17:47:41 crc kubenswrapper[4903]: I0128 17:47:41.692700 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889"} err="failed to get container status \"5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889\": rpc error: code = NotFound desc = could not find container \"5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889\": container with ID starting with 5cde73ad08b0a1083f3f9865ae8bd496fc760dce8a32e4cf9637fdcec9ab9889 not found: ID does not exist" Jan 28 17:47:42 crc kubenswrapper[4903]: I0128 17:47:42.426366 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" path="/var/lib/kubelet/pods/3f0724bd-adbd-49be-9a87-fe4712dd15df/volumes" Jan 28 17:47:43 crc kubenswrapper[4903]: I0128 17:47:43.414149 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:47:43 crc kubenswrapper[4903]: E0128 17:47:43.414730 4903 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:47:58 crc kubenswrapper[4903]: I0128 17:47:58.421604 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:47:58 crc kubenswrapper[4903]: E0128 17:47:58.422485 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:48:11 crc kubenswrapper[4903]: I0128 17:48:11.416046 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:48:11 crc kubenswrapper[4903]: E0128 17:48:11.418037 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:48:24 crc kubenswrapper[4903]: I0128 17:48:24.413596 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:48:24 crc kubenswrapper[4903]: E0128 17:48:24.414368 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:48:35 crc kubenswrapper[4903]: I0128 17:48:35.414093 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:48:35 crc kubenswrapper[4903]: E0128 17:48:35.415045 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:48:46 crc kubenswrapper[4903]: I0128 17:48:46.413386 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:48:46 crc kubenswrapper[4903]: E0128 17:48:46.414270 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.332114 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pwst4"] Jan 28 17:48:53 crc kubenswrapper[4903]: E0128 17:48:53.332877 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="extract-content" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.332890 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="extract-content" Jan 28 17:48:53 crc kubenswrapper[4903]: E0128 17:48:53.332911 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="extract-utilities" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.332917 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="extract-utilities" Jan 28 17:48:53 crc kubenswrapper[4903]: E0128 17:48:53.332939 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="registry-server" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.332947 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="registry-server" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.333126 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0724bd-adbd-49be-9a87-fe4712dd15df" containerName="registry-server" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.334755 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.354062 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pwst4"] Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.371916 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-catalog-content\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.372031 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-utilities\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.372099 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4z5q\" (UniqueName: \"kubernetes.io/projected/e7ac2e97-5bf9-4595-a172-2a6c709937d0-kube-api-access-m4z5q\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.473324 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-utilities\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.473387 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4z5q\" (UniqueName: \"kubernetes.io/projected/e7ac2e97-5bf9-4595-a172-2a6c709937d0-kube-api-access-m4z5q\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.473600 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-catalog-content\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.473880 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-utilities\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.474022 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-catalog-content\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.499740 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m4z5q\" (UniqueName: \"kubernetes.io/projected/e7ac2e97-5bf9-4595-a172-2a6c709937d0-kube-api-access-m4z5q\") pod \"community-operators-pwst4\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:53 crc kubenswrapper[4903]: I0128 17:48:53.661800 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:48:54 crc kubenswrapper[4903]: I0128 17:48:54.235348 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pwst4"] Jan 28 17:48:54 crc kubenswrapper[4903]: I0128 17:48:54.249446 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwst4" event={"ID":"e7ac2e97-5bf9-4595-a172-2a6c709937d0","Type":"ContainerStarted","Data":"e02643d8518679b5ae62eb1ed895ee98b07213888cbea2013329f252a0f1a3b3"} Jan 28 17:48:55 crc kubenswrapper[4903]: I0128 17:48:55.262024 4903 generic.go:334] "Generic (PLEG): container finished" podID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerID="69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9" exitCode=0 Jan 28 17:48:55 crc kubenswrapper[4903]: I0128 17:48:55.262133 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwst4" event={"ID":"e7ac2e97-5bf9-4595-a172-2a6c709937d0","Type":"ContainerDied","Data":"69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9"} Jan 28 17:48:57 crc kubenswrapper[4903]: I0128 17:48:57.286879 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwst4" event={"ID":"e7ac2e97-5bf9-4595-a172-2a6c709937d0","Type":"ContainerStarted","Data":"63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355"} Jan 28 17:48:57 crc kubenswrapper[4903]: I0128 17:48:57.919758 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ndwz9"] Jan 28 17:48:57 crc kubenswrapper[4903]: I0128 17:48:57.922325 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:57 crc kubenswrapper[4903]: I0128 17:48:57.935951 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndwz9"] Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.083662 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-utilities\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.084027 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49jmf\" (UniqueName: \"kubernetes.io/projected/4d0dca59-e30b-427e-9ac3-d5df0051235f-kube-api-access-49jmf\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.084150 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-catalog-content\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.185961 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-utilities\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.186074 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49jmf\" (UniqueName: \"kubernetes.io/projected/4d0dca59-e30b-427e-9ac3-d5df0051235f-kube-api-access-49jmf\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.186110 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-catalog-content\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.186614 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-catalog-content\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.186917 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-utilities\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.210745 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-49jmf\" (UniqueName: \"kubernetes.io/projected/4d0dca59-e30b-427e-9ac3-d5df0051235f-kube-api-access-49jmf\") pod \"certified-operators-ndwz9\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.243512 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:48:58 crc kubenswrapper[4903]: I0128 17:48:58.794106 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ndwz9"] Jan 28 17:48:59 crc kubenswrapper[4903]: I0128 17:48:59.307018 4903 generic.go:334] "Generic (PLEG): container finished" podID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerID="63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355" exitCode=0 Jan 28 17:48:59 crc kubenswrapper[4903]: I0128 17:48:59.307137 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwst4" event={"ID":"e7ac2e97-5bf9-4595-a172-2a6c709937d0","Type":"ContainerDied","Data":"63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355"} Jan 28 17:48:59 crc kubenswrapper[4903]: I0128 17:48:59.309078 4903 generic.go:334] "Generic (PLEG): container finished" podID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerID="177d84d3c84d4307816aa016f2fa6467f99165f4b07f90b383f0783fe4076133" exitCode=0 Jan 28 17:48:59 crc kubenswrapper[4903]: I0128 17:48:59.309131 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndwz9" event={"ID":"4d0dca59-e30b-427e-9ac3-d5df0051235f","Type":"ContainerDied","Data":"177d84d3c84d4307816aa016f2fa6467f99165f4b07f90b383f0783fe4076133"} Jan 28 17:48:59 crc kubenswrapper[4903]: I0128 17:48:59.310679 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndwz9" event={"ID":"4d0dca59-e30b-427e-9ac3-d5df0051235f","Type":"ContainerStarted","Data":"6dd739a2b6b2b93d264234fae021801ab49801c57b3986fa730fd72e37afa9ab"} Jan 28 17:49:00 crc kubenswrapper[4903]: I0128 17:49:00.325136 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndwz9" event={"ID":"4d0dca59-e30b-427e-9ac3-d5df0051235f","Type":"ContainerStarted","Data":"796f38a85fc735e7f4599ca443b6c02c2ef2f32b3ab61735ddd764b506812176"} Jan 28 17:49:00 crc kubenswrapper[4903]: I0128 17:49:00.329411 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwst4" event={"ID":"e7ac2e97-5bf9-4595-a172-2a6c709937d0","Type":"ContainerStarted","Data":"9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a"} Jan 28 17:49:00 crc kubenswrapper[4903]: I0128 17:49:00.376244 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pwst4" podStartSLOduration=2.856172425 podStartE2EDuration="7.376225984s" podCreationTimestamp="2026-01-28 17:48:53 +0000 UTC" firstStartedPulling="2026-01-28 17:48:55.265279972 +0000 UTC m=+7407.541251483" lastFinishedPulling="2026-01-28 17:48:59.785333541 +0000 UTC m=+7412.061305042" observedRunningTime="2026-01-28 17:49:00.372742399 +0000 UTC m=+7412.648713920" watchObservedRunningTime="2026-01-28 17:49:00.376225984 +0000 UTC m=+7412.652197495" Jan 28 17:49:00 crc kubenswrapper[4903]: I0128 17:49:00.418908 4903 scope.go:117] "RemoveContainer" 
containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:49:00 crc kubenswrapper[4903]: E0128 17:49:00.419184 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:49:03 crc kubenswrapper[4903]: I0128 17:49:03.362836 4903 generic.go:334] "Generic (PLEG): container finished" podID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerID="796f38a85fc735e7f4599ca443b6c02c2ef2f32b3ab61735ddd764b506812176" exitCode=0 Jan 28 17:49:03 crc kubenswrapper[4903]: I0128 17:49:03.362935 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndwz9" event={"ID":"4d0dca59-e30b-427e-9ac3-d5df0051235f","Type":"ContainerDied","Data":"796f38a85fc735e7f4599ca443b6c02c2ef2f32b3ab61735ddd764b506812176"} Jan 28 17:49:03 crc kubenswrapper[4903]: I0128 17:49:03.662922 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:49:03 crc kubenswrapper[4903]: I0128 17:49:03.663232 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:49:04 crc kubenswrapper[4903]: I0128 17:49:04.379152 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndwz9" event={"ID":"4d0dca59-e30b-427e-9ac3-d5df0051235f","Type":"ContainerStarted","Data":"dd172f698044bbb537da7a8a6dc4faf031145ff78db923798baba747fb3d6b84"} Jan 28 17:49:04 crc kubenswrapper[4903]: I0128 17:49:04.406293 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ndwz9" podStartSLOduration=2.820068302 podStartE2EDuration="7.40627082s" podCreationTimestamp="2026-01-28 17:48:57 +0000 UTC" firstStartedPulling="2026-01-28 17:48:59.310648257 +0000 UTC m=+7411.586619768" lastFinishedPulling="2026-01-28 17:49:03.896850775 +0000 UTC m=+7416.172822286" observedRunningTime="2026-01-28 17:49:04.400598375 +0000 UTC m=+7416.676569916" watchObservedRunningTime="2026-01-28 17:49:04.40627082 +0000 UTC m=+7416.682242331" Jan 28 17:49:04 crc kubenswrapper[4903]: I0128 17:49:04.706728 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pwst4" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:04 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:04 crc kubenswrapper[4903]: > Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.436336 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ckhsv"] Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.440517 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.448226 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckhsv"] Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.548373 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-catalog-content\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.551557 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lv9m\" (UniqueName: \"kubernetes.io/projected/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-kube-api-access-9lv9m\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.551679 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-utilities\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.654420 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lv9m\" (UniqueName: \"kubernetes.io/projected/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-kube-api-access-9lv9m\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.654497 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-utilities\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.654552 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-catalog-content\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.655259 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-catalog-content\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.655481 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-utilities\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.681504 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9lv9m\" (UniqueName: \"kubernetes.io/projected/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-kube-api-access-9lv9m\") pod \"redhat-operators-ckhsv\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:07 crc kubenswrapper[4903]: I0128 17:49:07.798144 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:08 crc kubenswrapper[4903]: I0128 17:49:08.244049 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:49:08 crc kubenswrapper[4903]: I0128 17:49:08.244113 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:49:08 crc kubenswrapper[4903]: I0128 17:49:08.328594 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckhsv"] Jan 28 17:49:08 crc kubenswrapper[4903]: I0128 17:49:08.426894 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckhsv" event={"ID":"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8","Type":"ContainerStarted","Data":"8eca76aaf49d1606cc3f40a36fd2698bd51a9f44d9c129bab01e002a864ca487"} Jan 28 17:49:09 crc kubenswrapper[4903]: I0128 17:49:09.326504 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ndwz9" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:09 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:09 crc kubenswrapper[4903]: > Jan 28 17:49:09 crc kubenswrapper[4903]: I0128 17:49:09.434903 4903 generic.go:334] "Generic (PLEG): container finished" podID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerID="cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d" exitCode=0 Jan 28 17:49:09 crc kubenswrapper[4903]: I0128 17:49:09.434967 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckhsv" event={"ID":"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8","Type":"ContainerDied","Data":"cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d"} Jan 28 17:49:10 crc kubenswrapper[4903]: I0128 17:49:10.455322 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckhsv" event={"ID":"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8","Type":"ContainerStarted","Data":"7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc"} Jan 28 17:49:14 crc kubenswrapper[4903]: I0128 17:49:14.706202 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pwst4" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:14 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:14 crc kubenswrapper[4903]: > Jan 28 17:49:15 crc kubenswrapper[4903]: I0128 17:49:15.413052 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:49:15 crc kubenswrapper[4903]: E0128 17:49:15.413617 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:49:19 crc kubenswrapper[4903]: I0128 17:49:19.292422 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ndwz9" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:19 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:19 crc kubenswrapper[4903]: > Jan 28 17:49:20 crc kubenswrapper[4903]: I0128 17:49:20.550967 4903 generic.go:334] "Generic (PLEG): container finished" podID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerID="7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc" exitCode=0 Jan 28 17:49:20 crc kubenswrapper[4903]: I0128 17:49:20.551061 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckhsv" event={"ID":"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8","Type":"ContainerDied","Data":"7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc"} Jan 28 17:49:21 crc kubenswrapper[4903]: I0128 17:49:21.564369 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckhsv" event={"ID":"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8","Type":"ContainerStarted","Data":"58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e"} Jan 28 17:49:21 crc kubenswrapper[4903]: I0128 17:49:21.594447 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ckhsv" podStartSLOduration=2.909372969 podStartE2EDuration="14.594429194s" podCreationTimestamp="2026-01-28 17:49:07 +0000 UTC" firstStartedPulling="2026-01-28 17:49:09.436794522 +0000 UTC m=+7421.712766033" lastFinishedPulling="2026-01-28 17:49:21.121850747 +0000 UTC m=+7433.397822258" observedRunningTime="2026-01-28 17:49:21.587241047 +0000 UTC m=+7433.863212578" watchObservedRunningTime="2026-01-28 17:49:21.594429194 +0000 UTC m=+7433.870400705" Jan 28 17:49:24 crc kubenswrapper[4903]: I0128 17:49:24.715936 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pwst4" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:24 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:24 crc kubenswrapper[4903]: > Jan 28 17:49:27 crc kubenswrapper[4903]: I0128 17:49:27.798597 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:27 crc kubenswrapper[4903]: I0128 17:49:27.799192 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:49:28 crc kubenswrapper[4903]: I0128 17:49:28.852351 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ckhsv" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:28 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:28 crc kubenswrapper[4903]: > Jan 28 17:49:29 crc kubenswrapper[4903]: I0128 17:49:29.297853 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ndwz9" 
podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:29 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:29 crc kubenswrapper[4903]: > Jan 28 17:49:29 crc kubenswrapper[4903]: I0128 17:49:29.413681 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:49:30 crc kubenswrapper[4903]: I0128 17:49:30.683035 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"8e6802290420c6d59f256a6272c07630904cfdec2373baad19af691305312c46"} Jan 28 17:49:33 crc kubenswrapper[4903]: I0128 17:49:33.714414 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:49:33 crc kubenswrapper[4903]: I0128 17:49:33.768675 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:49:33 crc kubenswrapper[4903]: I0128 17:49:33.949583 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pwst4"] Jan 28 17:49:35 crc kubenswrapper[4903]: I0128 17:49:35.732947 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pwst4" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="registry-server" containerID="cri-o://9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a" gracePeriod=2 Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.426328 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.517557 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-utilities\") pod \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.517996 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-catalog-content\") pod \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.518069 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4z5q\" (UniqueName: \"kubernetes.io/projected/e7ac2e97-5bf9-4595-a172-2a6c709937d0-kube-api-access-m4z5q\") pod \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\" (UID: \"e7ac2e97-5bf9-4595-a172-2a6c709937d0\") " Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.518340 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-utilities" (OuterVolumeSpecName: "utilities") pod "e7ac2e97-5bf9-4595-a172-2a6c709937d0" (UID: "e7ac2e97-5bf9-4595-a172-2a6c709937d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.519464 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.524637 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ac2e97-5bf9-4595-a172-2a6c709937d0-kube-api-access-m4z5q" (OuterVolumeSpecName: "kube-api-access-m4z5q") pod "e7ac2e97-5bf9-4595-a172-2a6c709937d0" (UID: "e7ac2e97-5bf9-4595-a172-2a6c709937d0"). InnerVolumeSpecName "kube-api-access-m4z5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.588076 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7ac2e97-5bf9-4595-a172-2a6c709937d0" (UID: "e7ac2e97-5bf9-4595-a172-2a6c709937d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.622412 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ac2e97-5bf9-4595-a172-2a6c709937d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.622466 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4z5q\" (UniqueName: \"kubernetes.io/projected/e7ac2e97-5bf9-4595-a172-2a6c709937d0-kube-api-access-m4z5q\") on node \"crc\" DevicePath \"\"" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.746663 4903 generic.go:334] "Generic (PLEG): container finished" podID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerID="9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a" exitCode=0 Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.746718 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwst4" event={"ID":"e7ac2e97-5bf9-4595-a172-2a6c709937d0","Type":"ContainerDied","Data":"9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a"} Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.746725 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pwst4" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.746745 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pwst4" event={"ID":"e7ac2e97-5bf9-4595-a172-2a6c709937d0","Type":"ContainerDied","Data":"e02643d8518679b5ae62eb1ed895ee98b07213888cbea2013329f252a0f1a3b3"} Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.746764 4903 scope.go:117] "RemoveContainer" containerID="9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.785089 4903 scope.go:117] "RemoveContainer" containerID="63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.808144 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pwst4"] Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.821229 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pwst4"] Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.824675 4903 scope.go:117] "RemoveContainer" containerID="69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.872718 4903 scope.go:117] "RemoveContainer" containerID="9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a" Jan 28 17:49:36 crc kubenswrapper[4903]: E0128 17:49:36.874232 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a\": container with ID starting with 9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a not found: ID does not exist" containerID="9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.874268 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a"} err="failed to get container status \"9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a\": rpc error: code = NotFound desc = could not find container \"9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a\": container with ID starting with 9b2a6680e007c0b63755afe3d2c80a68320a662de720329baa50c15e9d74026a not found: ID does not exist" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.874293 4903 scope.go:117] "RemoveContainer" containerID="63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355" Jan 28 17:49:36 crc kubenswrapper[4903]: E0128 17:49:36.874699 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355\": container with ID starting with 63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355 not found: ID does not exist" containerID="63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.874753 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355"} err="failed to get container status \"63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355\": rpc error: code = NotFound desc = could not find 
container \"63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355\": container with ID starting with 63b58f1d59fe0f45fe1cf4e4326188df06979cf6ad01e245393f1c75718f4355 not found: ID does not exist" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.874779 4903 scope.go:117] "RemoveContainer" containerID="69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9" Jan 28 17:49:36 crc kubenswrapper[4903]: E0128 17:49:36.875265 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9\": container with ID starting with 69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9 not found: ID does not exist" containerID="69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9" Jan 28 17:49:36 crc kubenswrapper[4903]: I0128 17:49:36.875299 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9"} err="failed to get container status \"69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9\": rpc error: code = NotFound desc = could not find container \"69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9\": container with ID starting with 69f7a351319ea9fbc747e768b293e66cb3e2a0b20b648fe1c99e9707c2ef11f9 not found: ID does not exist" Jan 28 17:49:38 crc kubenswrapper[4903]: I0128 17:49:38.298438 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:49:38 crc kubenswrapper[4903]: I0128 17:49:38.359305 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:49:38 crc kubenswrapper[4903]: I0128 17:49:38.430571 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" path="/var/lib/kubelet/pods/e7ac2e97-5bf9-4595-a172-2a6c709937d0/volumes" Jan 28 17:49:38 crc kubenswrapper[4903]: I0128 17:49:38.852295 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ckhsv" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:38 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:38 crc kubenswrapper[4903]: > Jan 28 17:49:40 crc kubenswrapper[4903]: I0128 17:49:40.359644 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndwz9"] Jan 28 17:49:40 crc kubenswrapper[4903]: I0128 17:49:40.360229 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ndwz9" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="registry-server" containerID="cri-o://dd172f698044bbb537da7a8a6dc4faf031145ff78db923798baba747fb3d6b84" gracePeriod=2 Jan 28 17:49:41 crc kubenswrapper[4903]: I0128 17:49:41.804382 4903 generic.go:334] "Generic (PLEG): container finished" podID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerID="dd172f698044bbb537da7a8a6dc4faf031145ff78db923798baba747fb3d6b84" exitCode=0 Jan 28 17:49:41 crc kubenswrapper[4903]: I0128 17:49:41.804478 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndwz9" 
event={"ID":"4d0dca59-e30b-427e-9ac3-d5df0051235f","Type":"ContainerDied","Data":"dd172f698044bbb537da7a8a6dc4faf031145ff78db923798baba747fb3d6b84"} Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.316429 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.364090 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49jmf\" (UniqueName: \"kubernetes.io/projected/4d0dca59-e30b-427e-9ac3-d5df0051235f-kube-api-access-49jmf\") pod \"4d0dca59-e30b-427e-9ac3-d5df0051235f\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.364190 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-utilities\") pod \"4d0dca59-e30b-427e-9ac3-d5df0051235f\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.364397 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-catalog-content\") pod \"4d0dca59-e30b-427e-9ac3-d5df0051235f\" (UID: \"4d0dca59-e30b-427e-9ac3-d5df0051235f\") " Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.365230 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-utilities" (OuterVolumeSpecName: "utilities") pod "4d0dca59-e30b-427e-9ac3-d5df0051235f" (UID: "4d0dca59-e30b-427e-9ac3-d5df0051235f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.372689 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0dca59-e30b-427e-9ac3-d5df0051235f-kube-api-access-49jmf" (OuterVolumeSpecName: "kube-api-access-49jmf") pod "4d0dca59-e30b-427e-9ac3-d5df0051235f" (UID: "4d0dca59-e30b-427e-9ac3-d5df0051235f"). InnerVolumeSpecName "kube-api-access-49jmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.424625 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d0dca59-e30b-427e-9ac3-d5df0051235f" (UID: "4d0dca59-e30b-427e-9ac3-d5df0051235f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.467371 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49jmf\" (UniqueName: \"kubernetes.io/projected/4d0dca59-e30b-427e-9ac3-d5df0051235f-kube-api-access-49jmf\") on node \"crc\" DevicePath \"\"" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.467420 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.467430 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0dca59-e30b-427e-9ac3-d5df0051235f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.816177 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ndwz9" event={"ID":"4d0dca59-e30b-427e-9ac3-d5df0051235f","Type":"ContainerDied","Data":"6dd739a2b6b2b93d264234fae021801ab49801c57b3986fa730fd72e37afa9ab"} Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.816234 4903 scope.go:117] "RemoveContainer" containerID="dd172f698044bbb537da7a8a6dc4faf031145ff78db923798baba747fb3d6b84" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.816250 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ndwz9" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.844985 4903 scope.go:117] "RemoveContainer" containerID="796f38a85fc735e7f4599ca443b6c02c2ef2f32b3ab61735ddd764b506812176" Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.848243 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ndwz9"] Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.859205 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ndwz9"] Jan 28 17:49:42 crc kubenswrapper[4903]: I0128 17:49:42.868279 4903 scope.go:117] "RemoveContainer" containerID="177d84d3c84d4307816aa016f2fa6467f99165f4b07f90b383f0783fe4076133" Jan 28 17:49:44 crc kubenswrapper[4903]: I0128 17:49:44.426442 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" path="/var/lib/kubelet/pods/4d0dca59-e30b-427e-9ac3-d5df0051235f/volumes" Jan 28 17:49:48 crc kubenswrapper[4903]: I0128 17:49:48.852695 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ckhsv" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:48 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:48 crc kubenswrapper[4903]: > Jan 28 17:49:58 crc kubenswrapper[4903]: I0128 17:49:58.845403 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ckhsv" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" probeResult="failure" output=< Jan 28 17:49:58 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:49:58 crc kubenswrapper[4903]: > Jan 28 17:50:08 crc kubenswrapper[4903]: I0128 17:50:08.853072 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ckhsv" 
podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" probeResult="failure" output=< Jan 28 17:50:08 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 17:50:08 crc kubenswrapper[4903]: > Jan 28 17:50:17 crc kubenswrapper[4903]: I0128 17:50:17.859673 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:50:17 crc kubenswrapper[4903]: I0128 17:50:17.926885 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:50:18 crc kubenswrapper[4903]: I0128 17:50:18.099243 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ckhsv"] Jan 28 17:50:19 crc kubenswrapper[4903]: I0128 17:50:19.165517 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ckhsv" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" containerID="cri-o://58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e" gracePeriod=2 Jan 28 17:50:19 crc kubenswrapper[4903]: I0128 17:50:19.756452 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:50:19 crc kubenswrapper[4903]: I0128 17:50:19.920161 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-utilities\") pod \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " Jan 28 17:50:19 crc kubenswrapper[4903]: I0128 17:50:19.920276 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lv9m\" (UniqueName: \"kubernetes.io/projected/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-kube-api-access-9lv9m\") pod \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " Jan 28 17:50:19 crc kubenswrapper[4903]: I0128 17:50:19.920341 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-catalog-content\") pod \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\" (UID: \"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8\") " Jan 28 17:50:19 crc kubenswrapper[4903]: I0128 17:50:19.921489 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-utilities" (OuterVolumeSpecName: "utilities") pod "bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" (UID: "bbce4a8c-eca8-4c6f-8942-f757efd9ebd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:50:19 crc kubenswrapper[4903]: I0128 17:50:19.928307 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-kube-api-access-9lv9m" (OuterVolumeSpecName: "kube-api-access-9lv9m") pod "bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" (UID: "bbce4a8c-eca8-4c6f-8942-f757efd9ebd8"). InnerVolumeSpecName "kube-api-access-9lv9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.023191 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.023232 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lv9m\" (UniqueName: \"kubernetes.io/projected/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-kube-api-access-9lv9m\") on node \"crc\" DevicePath \"\"" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.075227 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" (UID: "bbce4a8c-eca8-4c6f-8942-f757efd9ebd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.126812 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.182126 4903 generic.go:334] "Generic (PLEG): container finished" podID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerID="58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e" exitCode=0 Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.182202 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckhsv" event={"ID":"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8","Type":"ContainerDied","Data":"58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e"} Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.182210 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ckhsv" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.182252 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckhsv" event={"ID":"bbce4a8c-eca8-4c6f-8942-f757efd9ebd8","Type":"ContainerDied","Data":"8eca76aaf49d1606cc3f40a36fd2698bd51a9f44d9c129bab01e002a864ca487"} Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.182281 4903 scope.go:117] "RemoveContainer" containerID="58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.210050 4903 scope.go:117] "RemoveContainer" containerID="7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.234682 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ckhsv"] Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.254854 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ckhsv"] Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.265037 4903 scope.go:117] "RemoveContainer" containerID="cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.293022 4903 scope.go:117] "RemoveContainer" containerID="58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e" Jan 28 17:50:20 crc kubenswrapper[4903]: E0128 17:50:20.293608 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e\": container with ID starting with 58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e not found: ID does not exist" containerID="58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.293645 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e"} err="failed to get container status \"58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e\": rpc error: code = NotFound desc = could not find container \"58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e\": container with ID starting with 58470b4686a3f2bc53cba6c822d1cd89e556d0e6fe7d747478adf96a0650603e not found: ID does not exist" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.293665 4903 scope.go:117] "RemoveContainer" containerID="7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc" Jan 28 17:50:20 crc kubenswrapper[4903]: E0128 17:50:20.293997 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc\": container with ID starting with 7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc not found: ID does not exist" containerID="7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.294027 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc"} err="failed to get container status \"7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc\": rpc error: code = NotFound desc = could not find container 
\"7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc\": container with ID starting with 7c43aa101ef2ebd84b36a896e08c3944ac743029816c44fc7d2cbc5469eac1bc not found: ID does not exist" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.294046 4903 scope.go:117] "RemoveContainer" containerID="cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d" Jan 28 17:50:20 crc kubenswrapper[4903]: E0128 17:50:20.294388 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d\": container with ID starting with cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d not found: ID does not exist" containerID="cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.294423 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d"} err="failed to get container status \"cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d\": rpc error: code = NotFound desc = could not find container \"cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d\": container with ID starting with cd66e72e299c2974e8f4007ede71421bd5da6abe068abf58eef41d6eda18ff6d not found: ID does not exist" Jan 28 17:50:20 crc kubenswrapper[4903]: I0128 17:50:20.430372 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" path="/var/lib/kubelet/pods/bbce4a8c-eca8-4c6f-8942-f757efd9ebd8/volumes" Jan 28 17:50:40 crc kubenswrapper[4903]: I0128 17:50:40.362454 4903 generic.go:334] "Generic (PLEG): container finished" podID="4b970ca2-2eb3-43db-a58e-624d275ecf17" containerID="e81804d02eb5af352af58e1b28c7143c874ee750b003132533c94ef627602256" exitCode=0 Jan 28 17:50:40 crc kubenswrapper[4903]: I0128 17:50:40.362556 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" event={"ID":"4b970ca2-2eb3-43db-a58e-624d275ecf17","Type":"ContainerDied","Data":"e81804d02eb5af352af58e1b28c7143c874ee750b003132533c94ef627602256"} Jan 28 17:50:41 crc kubenswrapper[4903]: I0128 17:50:41.855002 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.030892 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-ssh-key-openstack-cell1\") pod \"4b970ca2-2eb3-43db-a58e-624d275ecf17\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.030966 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-bootstrap-combined-ca-bundle\") pod \"4b970ca2-2eb3-43db-a58e-624d275ecf17\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.030993 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-inventory\") pod \"4b970ca2-2eb3-43db-a58e-624d275ecf17\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.031088 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6wjt\" (UniqueName: \"kubernetes.io/projected/4b970ca2-2eb3-43db-a58e-624d275ecf17-kube-api-access-r6wjt\") pod \"4b970ca2-2eb3-43db-a58e-624d275ecf17\" (UID: \"4b970ca2-2eb3-43db-a58e-624d275ecf17\") " Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.037775 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b970ca2-2eb3-43db-a58e-624d275ecf17-kube-api-access-r6wjt" (OuterVolumeSpecName: "kube-api-access-r6wjt") pod "4b970ca2-2eb3-43db-a58e-624d275ecf17" (UID: "4b970ca2-2eb3-43db-a58e-624d275ecf17"). InnerVolumeSpecName "kube-api-access-r6wjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.038658 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4b970ca2-2eb3-43db-a58e-624d275ecf17" (UID: "4b970ca2-2eb3-43db-a58e-624d275ecf17"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.063546 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-inventory" (OuterVolumeSpecName: "inventory") pod "4b970ca2-2eb3-43db-a58e-624d275ecf17" (UID: "4b970ca2-2eb3-43db-a58e-624d275ecf17"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.065072 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "4b970ca2-2eb3-43db-a58e-624d275ecf17" (UID: "4b970ca2-2eb3-43db-a58e-624d275ecf17"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.133758 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.133807 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6wjt\" (UniqueName: \"kubernetes.io/projected/4b970ca2-2eb3-43db-a58e-624d275ecf17-kube-api-access-r6wjt\") on node \"crc\" DevicePath \"\"" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.133821 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.133835 4903 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b970ca2-2eb3-43db-a58e-624d275ecf17-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.380904 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" event={"ID":"4b970ca2-2eb3-43db-a58e-624d275ecf17","Type":"ContainerDied","Data":"a918577c6009a4d346344f3d6ff78fc3aa2080c50e2ce80d7e501fb561949ec4"} Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.381252 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a918577c6009a4d346344f3d6ff78fc3aa2080c50e2ce80d7e501fb561949ec4" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.380969 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-psvpm" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.489193 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-8ljnv"] Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.490142 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="extract-utilities" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.490216 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="extract-utilities" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.490302 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.490369 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.490442 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="extract-content" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.490497 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="extract-content" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.490646 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.490715 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.490783 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="extract-utilities" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.490850 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="extract-utilities" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.490938 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.491011 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.491088 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="extract-utilities" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.491159 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="extract-utilities" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.491240 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="extract-content" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.491319 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="extract-content" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.491402 4903 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="extract-content" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.491468 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="extract-content" Jan 28 17:50:42 crc kubenswrapper[4903]: E0128 17:50:42.491562 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b970ca2-2eb3-43db-a58e-624d275ecf17" containerName="bootstrap-openstack-openstack-cell1" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.491629 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b970ca2-2eb3-43db-a58e-624d275ecf17" containerName="bootstrap-openstack-openstack-cell1" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.491953 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b970ca2-2eb3-43db-a58e-624d275ecf17" containerName="bootstrap-openstack-openstack-cell1" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.492048 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0dca59-e30b-427e-9ac3-d5df0051235f" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.492154 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7ac2e97-5bf9-4595-a172-2a6c709937d0" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.492244 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbce4a8c-eca8-4c6f-8942-f757efd9ebd8" containerName="registry-server" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.493226 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.495908 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.496055 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.497436 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.497674 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.499304 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-8ljnv"] Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.652339 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.652517 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hvk\" (UniqueName: \"kubernetes.io/projected/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-kube-api-access-n6hvk\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " 
pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.653101 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-inventory\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.755322 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.755394 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6hvk\" (UniqueName: \"kubernetes.io/projected/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-kube-api-access-n6hvk\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.755565 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-inventory\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.760826 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-inventory\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.762204 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.775171 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6hvk\" (UniqueName: \"kubernetes.io/projected/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-kube-api-access-n6hvk\") pod \"download-cache-openstack-openstack-cell1-8ljnv\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:42 crc kubenswrapper[4903]: I0128 17:50:42.815088 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:50:43 crc kubenswrapper[4903]: I0128 17:50:43.369349 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-8ljnv"] Jan 28 17:50:43 crc kubenswrapper[4903]: I0128 17:50:43.394298 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" event={"ID":"92b3d5b3-e36e-45ea-9b52-bc461b664ca6","Type":"ContainerStarted","Data":"ab399112ad390ec3a273ea7ad1bf49133688f65ae76eb1c5b752ef2f78c65ae0"} Jan 28 17:50:44 crc kubenswrapper[4903]: I0128 17:50:44.404687 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" event={"ID":"92b3d5b3-e36e-45ea-9b52-bc461b664ca6","Type":"ContainerStarted","Data":"815959f396f7dc5b5b3aae03dde8d08fd3bc36b95fe72614aeb7f56b513902b3"} Jan 28 17:50:44 crc kubenswrapper[4903]: I0128 17:50:44.435929 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" podStartSLOduration=1.8968706119999998 podStartE2EDuration="2.435902337s" podCreationTimestamp="2026-01-28 17:50:42 +0000 UTC" firstStartedPulling="2026-01-28 17:50:43.375022528 +0000 UTC m=+7515.650994039" lastFinishedPulling="2026-01-28 17:50:43.914054233 +0000 UTC m=+7516.190025764" observedRunningTime="2026-01-28 17:50:44.428787662 +0000 UTC m=+7516.704759183" watchObservedRunningTime="2026-01-28 17:50:44.435902337 +0000 UTC m=+7516.711873848" Jan 28 17:51:56 crc kubenswrapper[4903]: I0128 17:51:56.614245 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:51:56 crc kubenswrapper[4903]: I0128 17:51:56.614853 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:52:15 crc kubenswrapper[4903]: I0128 17:52:15.481248 4903 generic.go:334] "Generic (PLEG): container finished" podID="92b3d5b3-e36e-45ea-9b52-bc461b664ca6" containerID="815959f396f7dc5b5b3aae03dde8d08fd3bc36b95fe72614aeb7f56b513902b3" exitCode=0 Jan 28 17:52:15 crc kubenswrapper[4903]: I0128 17:52:15.481331 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" event={"ID":"92b3d5b3-e36e-45ea-9b52-bc461b664ca6","Type":"ContainerDied","Data":"815959f396f7dc5b5b3aae03dde8d08fd3bc36b95fe72614aeb7f56b513902b3"} Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.002567 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.082699 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-inventory\") pod \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.082787 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6hvk\" (UniqueName: \"kubernetes.io/projected/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-kube-api-access-n6hvk\") pod \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.082942 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-ssh-key-openstack-cell1\") pod \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\" (UID: \"92b3d5b3-e36e-45ea-9b52-bc461b664ca6\") " Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.105542 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-kube-api-access-n6hvk" (OuterVolumeSpecName: "kube-api-access-n6hvk") pod "92b3d5b3-e36e-45ea-9b52-bc461b664ca6" (UID: "92b3d5b3-e36e-45ea-9b52-bc461b664ca6"). InnerVolumeSpecName "kube-api-access-n6hvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.118087 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "92b3d5b3-e36e-45ea-9b52-bc461b664ca6" (UID: "92b3d5b3-e36e-45ea-9b52-bc461b664ca6"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.132341 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-inventory" (OuterVolumeSpecName: "inventory") pod "92b3d5b3-e36e-45ea-9b52-bc461b664ca6" (UID: "92b3d5b3-e36e-45ea-9b52-bc461b664ca6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.185772 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.185821 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6hvk\" (UniqueName: \"kubernetes.io/projected/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-kube-api-access-n6hvk\") on node \"crc\" DevicePath \"\"" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.185832 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/92b3d5b3-e36e-45ea-9b52-bc461b664ca6-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.502381 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" event={"ID":"92b3d5b3-e36e-45ea-9b52-bc461b664ca6","Type":"ContainerDied","Data":"ab399112ad390ec3a273ea7ad1bf49133688f65ae76eb1c5b752ef2f78c65ae0"} Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.502774 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab399112ad390ec3a273ea7ad1bf49133688f65ae76eb1c5b752ef2f78c65ae0" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.502726 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-8ljnv" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.604646 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-28qjh"] Jan 28 17:52:17 crc kubenswrapper[4903]: E0128 17:52:17.605135 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b3d5b3-e36e-45ea-9b52-bc461b664ca6" containerName="download-cache-openstack-openstack-cell1" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.605156 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b3d5b3-e36e-45ea-9b52-bc461b664ca6" containerName="download-cache-openstack-openstack-cell1" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.605424 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b3d5b3-e36e-45ea-9b52-bc461b664ca6" containerName="download-cache-openstack-openstack-cell1" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.606183 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.608866 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.609089 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.609282 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.609452 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.615609 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-28qjh"] Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.698412 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-inventory\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.698472 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.698654 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rpg\" (UniqueName: \"kubernetes.io/projected/1ee42dda-7044-47d0-b631-327f05260f01-kube-api-access-64rpg\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.800743 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-inventory\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.800798 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.800845 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rpg\" (UniqueName: \"kubernetes.io/projected/1ee42dda-7044-47d0-b631-327f05260f01-kube-api-access-64rpg\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: 
\"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.806146 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.815993 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-inventory\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.831471 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rpg\" (UniqueName: \"kubernetes.io/projected/1ee42dda-7044-47d0-b631-327f05260f01-kube-api-access-64rpg\") pod \"configure-network-openstack-openstack-cell1-28qjh\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:17 crc kubenswrapper[4903]: I0128 17:52:17.924116 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:52:18 crc kubenswrapper[4903]: I0128 17:52:18.721693 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-28qjh"] Jan 28 17:52:19 crc kubenswrapper[4903]: I0128 17:52:19.556228 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" event={"ID":"1ee42dda-7044-47d0-b631-327f05260f01","Type":"ContainerStarted","Data":"61c7b1426c040b64fc7d238e923647899db060f2e27540cf32746ec6549b959d"} Jan 28 17:52:20 crc kubenswrapper[4903]: I0128 17:52:20.564670 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" event={"ID":"1ee42dda-7044-47d0-b631-327f05260f01","Type":"ContainerStarted","Data":"e6aead5d9929863a57dbeb4ffdcf1f304515661a09a17fc929d1d5627df009af"} Jan 28 17:52:20 crc kubenswrapper[4903]: I0128 17:52:20.588683 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" podStartSLOduration=2.736837343 podStartE2EDuration="3.588661213s" podCreationTimestamp="2026-01-28 17:52:17 +0000 UTC" firstStartedPulling="2026-01-28 17:52:18.72368254 +0000 UTC m=+7610.999654051" lastFinishedPulling="2026-01-28 17:52:19.57550641 +0000 UTC m=+7611.851477921" observedRunningTime="2026-01-28 17:52:20.580925761 +0000 UTC m=+7612.856897282" watchObservedRunningTime="2026-01-28 17:52:20.588661213 +0000 UTC m=+7612.864632724" Jan 28 17:52:26 crc kubenswrapper[4903]: I0128 17:52:26.614100 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:52:26 crc kubenswrapper[4903]: I0128 17:52:26.614737 4903 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.613414 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.613968 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.614020 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.614790 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e6802290420c6d59f256a6272c07630904cfdec2373baad19af691305312c46"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.614856 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://8e6802290420c6d59f256a6272c07630904cfdec2373baad19af691305312c46" gracePeriod=600 Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.960039 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="8e6802290420c6d59f256a6272c07630904cfdec2373baad19af691305312c46" exitCode=0 Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.960366 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"8e6802290420c6d59f256a6272c07630904cfdec2373baad19af691305312c46"} Jan 28 17:52:56 crc kubenswrapper[4903]: I0128 17:52:56.960398 4903 scope.go:117] "RemoveContainer" containerID="818ac4d7202a18309c056490a84d6371289cb3a20dd0b642fde3b9a2cc84bf65" Jan 28 17:52:57 crc kubenswrapper[4903]: I0128 17:52:57.971375 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed"} Jan 28 17:53:42 crc kubenswrapper[4903]: I0128 17:53:42.348873 4903 generic.go:334] "Generic (PLEG): container finished" podID="1ee42dda-7044-47d0-b631-327f05260f01" containerID="e6aead5d9929863a57dbeb4ffdcf1f304515661a09a17fc929d1d5627df009af" exitCode=0 Jan 28 17:53:42 crc kubenswrapper[4903]: I0128 17:53:42.348978 4903 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" event={"ID":"1ee42dda-7044-47d0-b631-327f05260f01","Type":"ContainerDied","Data":"e6aead5d9929863a57dbeb4ffdcf1f304515661a09a17fc929d1d5627df009af"} Jan 28 17:53:43 crc kubenswrapper[4903]: I0128 17:53:43.829553 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:53:43 crc kubenswrapper[4903]: I0128 17:53:43.974464 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-ssh-key-openstack-cell1\") pod \"1ee42dda-7044-47d0-b631-327f05260f01\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " Jan 28 17:53:43 crc kubenswrapper[4903]: I0128 17:53:43.974768 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64rpg\" (UniqueName: \"kubernetes.io/projected/1ee42dda-7044-47d0-b631-327f05260f01-kube-api-access-64rpg\") pod \"1ee42dda-7044-47d0-b631-327f05260f01\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " Jan 28 17:53:43 crc kubenswrapper[4903]: I0128 17:53:43.974867 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-inventory\") pod \"1ee42dda-7044-47d0-b631-327f05260f01\" (UID: \"1ee42dda-7044-47d0-b631-327f05260f01\") " Jan 28 17:53:43 crc kubenswrapper[4903]: I0128 17:53:43.980266 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee42dda-7044-47d0-b631-327f05260f01-kube-api-access-64rpg" (OuterVolumeSpecName: "kube-api-access-64rpg") pod "1ee42dda-7044-47d0-b631-327f05260f01" (UID: "1ee42dda-7044-47d0-b631-327f05260f01"). InnerVolumeSpecName "kube-api-access-64rpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.004187 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-inventory" (OuterVolumeSpecName: "inventory") pod "1ee42dda-7044-47d0-b631-327f05260f01" (UID: "1ee42dda-7044-47d0-b631-327f05260f01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.009163 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1ee42dda-7044-47d0-b631-327f05260f01" (UID: "1ee42dda-7044-47d0-b631-327f05260f01"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.077379 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.077423 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64rpg\" (UniqueName: \"kubernetes.io/projected/1ee42dda-7044-47d0-b631-327f05260f01-kube-api-access-64rpg\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.077437 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ee42dda-7044-47d0-b631-327f05260f01-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.369094 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" event={"ID":"1ee42dda-7044-47d0-b631-327f05260f01","Type":"ContainerDied","Data":"61c7b1426c040b64fc7d238e923647899db060f2e27540cf32746ec6549b959d"} Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.369140 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c7b1426c040b64fc7d238e923647899db060f2e27540cf32746ec6549b959d" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.369167 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-28qjh" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.464979 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-89jf8"] Jan 28 17:53:44 crc kubenswrapper[4903]: E0128 17:53:44.465615 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee42dda-7044-47d0-b631-327f05260f01" containerName="configure-network-openstack-openstack-cell1" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.465636 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee42dda-7044-47d0-b631-327f05260f01" containerName="configure-network-openstack-openstack-cell1" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.465932 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee42dda-7044-47d0-b631-327f05260f01" containerName="configure-network-openstack-openstack-cell1" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.466939 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.470476 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.470506 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.470553 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.470906 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.476168 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-89jf8"] Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.596908 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.597109 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-inventory\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.597218 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kk8\" (UniqueName: \"kubernetes.io/projected/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-kube-api-access-v4kk8\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.698913 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.699066 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-inventory\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.699165 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kk8\" (UniqueName: \"kubernetes.io/projected/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-kube-api-access-v4kk8\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: 
\"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.704408 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.716543 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-inventory\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.724613 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4kk8\" (UniqueName: \"kubernetes.io/projected/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-kube-api-access-v4kk8\") pod \"validate-network-openstack-openstack-cell1-89jf8\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:44 crc kubenswrapper[4903]: I0128 17:53:44.840118 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:45 crc kubenswrapper[4903]: I0128 17:53:45.375563 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-89jf8"] Jan 28 17:53:45 crc kubenswrapper[4903]: I0128 17:53:45.380696 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:53:46 crc kubenswrapper[4903]: I0128 17:53:46.392481 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" event={"ID":"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4","Type":"ContainerStarted","Data":"6e954bd35dd9419cbe0793ab081dbafcf253c54cc0c27fd0ab0797669228d47d"} Jan 28 17:53:47 crc kubenswrapper[4903]: I0128 17:53:47.407705 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" event={"ID":"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4","Type":"ContainerStarted","Data":"2352f222a49be1fbd317d67ac11faf36d512745ae5b5a5c597eb760b846e0654"} Jan 28 17:53:47 crc kubenswrapper[4903]: I0128 17:53:47.426621 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" podStartSLOduration=2.617675597 podStartE2EDuration="3.426600074s" podCreationTimestamp="2026-01-28 17:53:44 +0000 UTC" firstStartedPulling="2026-01-28 17:53:45.380443835 +0000 UTC m=+7697.656415346" lastFinishedPulling="2026-01-28 17:53:46.189368312 +0000 UTC m=+7698.465339823" observedRunningTime="2026-01-28 17:53:47.421287218 +0000 UTC m=+7699.697258759" watchObservedRunningTime="2026-01-28 17:53:47.426600074 +0000 UTC m=+7699.702571605" Jan 28 17:53:51 crc kubenswrapper[4903]: I0128 17:53:51.447160 4903 generic.go:334] "Generic (PLEG): container finished" podID="0d8d93d1-80e0-4a45-a155-a8ccd842b7c4" containerID="2352f222a49be1fbd317d67ac11faf36d512745ae5b5a5c597eb760b846e0654" exitCode=0 Jan 28 17:53:51 crc kubenswrapper[4903]: 
I0128 17:53:51.447230 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" event={"ID":"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4","Type":"ContainerDied","Data":"2352f222a49be1fbd317d67ac11faf36d512745ae5b5a5c597eb760b846e0654"} Jan 28 17:53:52 crc kubenswrapper[4903]: I0128 17:53:52.899803 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:52 crc kubenswrapper[4903]: I0128 17:53:52.985510 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4kk8\" (UniqueName: \"kubernetes.io/projected/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-kube-api-access-v4kk8\") pod \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " Jan 28 17:53:52 crc kubenswrapper[4903]: I0128 17:53:52.985778 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-ssh-key-openstack-cell1\") pod \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " Jan 28 17:53:52 crc kubenswrapper[4903]: I0128 17:53:52.985974 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-inventory\") pod \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\" (UID: \"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4\") " Jan 28 17:53:52 crc kubenswrapper[4903]: I0128 17:53:52.992843 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-kube-api-access-v4kk8" (OuterVolumeSpecName: "kube-api-access-v4kk8") pod "0d8d93d1-80e0-4a45-a155-a8ccd842b7c4" (UID: "0d8d93d1-80e0-4a45-a155-a8ccd842b7c4"). InnerVolumeSpecName "kube-api-access-v4kk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.015298 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0d8d93d1-80e0-4a45-a155-a8ccd842b7c4" (UID: "0d8d93d1-80e0-4a45-a155-a8ccd842b7c4"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.021778 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-inventory" (OuterVolumeSpecName: "inventory") pod "0d8d93d1-80e0-4a45-a155-a8ccd842b7c4" (UID: "0d8d93d1-80e0-4a45-a155-a8ccd842b7c4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.088248 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.088290 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4kk8\" (UniqueName: \"kubernetes.io/projected/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-kube-api-access-v4kk8\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.088305 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0d8d93d1-80e0-4a45-a155-a8ccd842b7c4-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.471358 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" event={"ID":"0d8d93d1-80e0-4a45-a155-a8ccd842b7c4","Type":"ContainerDied","Data":"6e954bd35dd9419cbe0793ab081dbafcf253c54cc0c27fd0ab0797669228d47d"} Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.471411 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e954bd35dd9419cbe0793ab081dbafcf253c54cc0c27fd0ab0797669228d47d" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.471512 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-89jf8" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.566283 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-k7qqd"] Jan 28 17:53:53 crc kubenswrapper[4903]: E0128 17:53:53.566797 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d93d1-80e0-4a45-a155-a8ccd842b7c4" containerName="validate-network-openstack-openstack-cell1" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.566828 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d93d1-80e0-4a45-a155-a8ccd842b7c4" containerName="validate-network-openstack-openstack-cell1" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.567087 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8d93d1-80e0-4a45-a155-a8ccd842b7c4" containerName="validate-network-openstack-openstack-cell1" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.567833 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.570120 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.570258 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.570345 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.570634 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.589970 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-k7qqd"] Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.701273 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.701795 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-inventory\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.701873 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n76xn\" (UniqueName: \"kubernetes.io/projected/28f0bf11-1db3-4c15-b7ce-292f773088c1-kube-api-access-n76xn\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.803122 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.803185 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-inventory\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.803241 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n76xn\" (UniqueName: \"kubernetes.io/projected/28f0bf11-1db3-4c15-b7ce-292f773088c1-kube-api-access-n76xn\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 
28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.807222 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-inventory\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.812154 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.826372 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n76xn\" (UniqueName: \"kubernetes.io/projected/28f0bf11-1db3-4c15-b7ce-292f773088c1-kube-api-access-n76xn\") pod \"install-os-openstack-openstack-cell1-k7qqd\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:53 crc kubenswrapper[4903]: I0128 17:53:53.886702 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:53:54 crc kubenswrapper[4903]: I0128 17:53:54.399798 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-k7qqd"] Jan 28 17:53:54 crc kubenswrapper[4903]: I0128 17:53:54.495730 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" event={"ID":"28f0bf11-1db3-4c15-b7ce-292f773088c1","Type":"ContainerStarted","Data":"68c54baa82a16042c6d9a4e627743ed85c057872673d09df13c19de3b0640da6"} Jan 28 17:53:55 crc kubenswrapper[4903]: I0128 17:53:55.510388 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" event={"ID":"28f0bf11-1db3-4c15-b7ce-292f773088c1","Type":"ContainerStarted","Data":"9fb19e48f0b3c20d5542de8a8dfd4f6ad969e8175153e9e10180c042fb72f4b4"} Jan 28 17:53:55 crc kubenswrapper[4903]: I0128 17:53:55.543289 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" podStartSLOduration=2.021299006 podStartE2EDuration="2.543218602s" podCreationTimestamp="2026-01-28 17:53:53 +0000 UTC" firstStartedPulling="2026-01-28 17:53:54.404641958 +0000 UTC m=+7706.680613459" lastFinishedPulling="2026-01-28 17:53:54.926561544 +0000 UTC m=+7707.202533055" observedRunningTime="2026-01-28 17:53:55.535048188 +0000 UTC m=+7707.811019699" watchObservedRunningTime="2026-01-28 17:53:55.543218602 +0000 UTC m=+7707.819190123" Jan 28 17:54:39 crc kubenswrapper[4903]: I0128 17:54:39.906974 4903 generic.go:334] "Generic (PLEG): container finished" podID="28f0bf11-1db3-4c15-b7ce-292f773088c1" containerID="9fb19e48f0b3c20d5542de8a8dfd4f6ad969e8175153e9e10180c042fb72f4b4" exitCode=0 Jan 28 17:54:39 crc kubenswrapper[4903]: I0128 17:54:39.907085 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" event={"ID":"28f0bf11-1db3-4c15-b7ce-292f773088c1","Type":"ContainerDied","Data":"9fb19e48f0b3c20d5542de8a8dfd4f6ad969e8175153e9e10180c042fb72f4b4"} Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 
17:54:41.421209 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.457254 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-ssh-key-openstack-cell1\") pod \"28f0bf11-1db3-4c15-b7ce-292f773088c1\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.457384 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n76xn\" (UniqueName: \"kubernetes.io/projected/28f0bf11-1db3-4c15-b7ce-292f773088c1-kube-api-access-n76xn\") pod \"28f0bf11-1db3-4c15-b7ce-292f773088c1\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.457411 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-inventory\") pod \"28f0bf11-1db3-4c15-b7ce-292f773088c1\" (UID: \"28f0bf11-1db3-4c15-b7ce-292f773088c1\") " Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.463593 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28f0bf11-1db3-4c15-b7ce-292f773088c1-kube-api-access-n76xn" (OuterVolumeSpecName: "kube-api-access-n76xn") pod "28f0bf11-1db3-4c15-b7ce-292f773088c1" (UID: "28f0bf11-1db3-4c15-b7ce-292f773088c1"). InnerVolumeSpecName "kube-api-access-n76xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.492700 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-inventory" (OuterVolumeSpecName: "inventory") pod "28f0bf11-1db3-4c15-b7ce-292f773088c1" (UID: "28f0bf11-1db3-4c15-b7ce-292f773088c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.495752 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "28f0bf11-1db3-4c15-b7ce-292f773088c1" (UID: "28f0bf11-1db3-4c15-b7ce-292f773088c1"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.560198 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.560626 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n76xn\" (UniqueName: \"kubernetes.io/projected/28f0bf11-1db3-4c15-b7ce-292f773088c1-kube-api-access-n76xn\") on node \"crc\" DevicePath \"\"" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.560721 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28f0bf11-1db3-4c15-b7ce-292f773088c1-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.931253 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" event={"ID":"28f0bf11-1db3-4c15-b7ce-292f773088c1","Type":"ContainerDied","Data":"68c54baa82a16042c6d9a4e627743ed85c057872673d09df13c19de3b0640da6"} Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.931317 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c54baa82a16042c6d9a4e627743ed85c057872673d09df13c19de3b0640da6" Jan 28 17:54:41 crc kubenswrapper[4903]: I0128 17:54:41.931403 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k7qqd" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.045556 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-5vjln"] Jan 28 17:54:42 crc kubenswrapper[4903]: E0128 17:54:42.045955 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28f0bf11-1db3-4c15-b7ce-292f773088c1" containerName="install-os-openstack-openstack-cell1" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.045976 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="28f0bf11-1db3-4c15-b7ce-292f773088c1" containerName="install-os-openstack-openstack-cell1" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.046203 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="28f0bf11-1db3-4c15-b7ce-292f773088c1" containerName="install-os-openstack-openstack-cell1" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.046979 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.050443 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.050722 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.050722 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.056430 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.056646 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-5vjln"] Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.174007 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.174160 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-inventory\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.174320 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlf4w\" (UniqueName: \"kubernetes.io/projected/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-kube-api-access-hlf4w\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.275884 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlf4w\" (UniqueName: \"kubernetes.io/projected/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-kube-api-access-hlf4w\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.275991 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.276076 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-inventory\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " 
pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.280723 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.281051 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-inventory\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.290727 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlf4w\" (UniqueName: \"kubernetes.io/projected/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-kube-api-access-hlf4w\") pod \"configure-os-openstack-openstack-cell1-5vjln\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.368181 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.923484 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-5vjln"] Jan 28 17:54:42 crc kubenswrapper[4903]: I0128 17:54:42.940264 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" event={"ID":"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e","Type":"ContainerStarted","Data":"da5d1d6f0820c2ecb0dac572e690c69553045af663f27a1b824fc25aa39f2c31"} Jan 28 17:54:43 crc kubenswrapper[4903]: I0128 17:54:43.955470 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" event={"ID":"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e","Type":"ContainerStarted","Data":"12a957bc4c5f688222e5617c110495e1425ea1f57890cc0385bc31332053e2e1"} Jan 28 17:54:43 crc kubenswrapper[4903]: I0128 17:54:43.976945 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" podStartSLOduration=1.423171651 podStartE2EDuration="1.976922908s" podCreationTimestamp="2026-01-28 17:54:42 +0000 UTC" firstStartedPulling="2026-01-28 17:54:42.929295322 +0000 UTC m=+7755.205266833" lastFinishedPulling="2026-01-28 17:54:43.483046579 +0000 UTC m=+7755.759018090" observedRunningTime="2026-01-28 17:54:43.974519592 +0000 UTC m=+7756.250491143" watchObservedRunningTime="2026-01-28 17:54:43.976922908 +0000 UTC m=+7756.252894439" Jan 28 17:54:56 crc kubenswrapper[4903]: I0128 17:54:56.613778 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:54:56 crc kubenswrapper[4903]: I0128 17:54:56.614233 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:55:26 crc kubenswrapper[4903]: I0128 17:55:26.355003 4903 generic.go:334] "Generic (PLEG): container finished" podID="c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e" containerID="12a957bc4c5f688222e5617c110495e1425ea1f57890cc0385bc31332053e2e1" exitCode=0 Jan 28 17:55:26 crc kubenswrapper[4903]: I0128 17:55:26.355099 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" event={"ID":"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e","Type":"ContainerDied","Data":"12a957bc4c5f688222e5617c110495e1425ea1f57890cc0385bc31332053e2e1"} Jan 28 17:55:26 crc kubenswrapper[4903]: I0128 17:55:26.613834 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:55:26 crc kubenswrapper[4903]: I0128 17:55:26.614304 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:55:27 crc kubenswrapper[4903]: I0128 17:55:27.907010 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.034976 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-inventory\") pod \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.035203 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlf4w\" (UniqueName: \"kubernetes.io/projected/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-kube-api-access-hlf4w\") pod \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.035413 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-ssh-key-openstack-cell1\") pod \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\" (UID: \"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e\") " Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.049861 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-kube-api-access-hlf4w" (OuterVolumeSpecName: "kube-api-access-hlf4w") pod "c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e" (UID: "c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e"). InnerVolumeSpecName "kube-api-access-hlf4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.067984 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e" (UID: "c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.068397 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-inventory" (OuterVolumeSpecName: "inventory") pod "c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e" (UID: "c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.138431 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.138886 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.138898 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlf4w\" (UniqueName: \"kubernetes.io/projected/c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e-kube-api-access-hlf4w\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.386722 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" event={"ID":"c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e","Type":"ContainerDied","Data":"da5d1d6f0820c2ecb0dac572e690c69553045af663f27a1b824fc25aa39f2c31"} Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.386790 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5d1d6f0820c2ecb0dac572e690c69553045af663f27a1b824fc25aa39f2c31" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.386870 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-5vjln" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.473140 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-bmhqz"] Jan 28 17:55:28 crc kubenswrapper[4903]: E0128 17:55:28.473649 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e" containerName="configure-os-openstack-openstack-cell1" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.473668 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e" containerName="configure-os-openstack-openstack-cell1" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.473905 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a4b7fd-d2b8-4feb-8924-1e09d30d2c3e" containerName="configure-os-openstack-openstack-cell1" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.474635 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.478704 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.478790 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.478902 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.478911 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.493400 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-bmhqz"] Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.547587 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.547660 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-inventory-0\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.547693 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4dmr\" (UniqueName: \"kubernetes.io/projected/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-kube-api-access-b4dmr\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.649999 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.650082 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-inventory-0\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.650128 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dmr\" (UniqueName: \"kubernetes.io/projected/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-kube-api-access-b4dmr\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.655605 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" 
(UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-inventory-0\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.668944 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.673400 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4dmr\" (UniqueName: \"kubernetes.io/projected/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-kube-api-access-b4dmr\") pod \"ssh-known-hosts-openstack-bmhqz\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:28 crc kubenswrapper[4903]: I0128 17:55:28.793217 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:29 crc kubenswrapper[4903]: I0128 17:55:29.393570 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-bmhqz"] Jan 28 17:55:30 crc kubenswrapper[4903]: I0128 17:55:30.449137 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-bmhqz" event={"ID":"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc","Type":"ContainerStarted","Data":"5084dafb863349be6b136aa1332dd4f935f028a46e4d325fcffa8394c1ae4117"} Jan 28 17:55:31 crc kubenswrapper[4903]: I0128 17:55:31.432250 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-bmhqz" event={"ID":"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc","Type":"ContainerStarted","Data":"df3de3504df38a3fa8f3917ac4b6cddd87ff2e87bee966d4583b0d65213c0b1f"} Jan 28 17:55:31 crc kubenswrapper[4903]: I0128 17:55:31.469745 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-bmhqz" podStartSLOduration=2.474184356 podStartE2EDuration="3.469725787s" podCreationTimestamp="2026-01-28 17:55:28 +0000 UTC" firstStartedPulling="2026-01-28 17:55:29.398889493 +0000 UTC m=+7801.674861004" lastFinishedPulling="2026-01-28 17:55:30.394430894 +0000 UTC m=+7802.670402435" observedRunningTime="2026-01-28 17:55:31.454378527 +0000 UTC m=+7803.730350038" watchObservedRunningTime="2026-01-28 17:55:31.469725787 +0000 UTC m=+7803.745697318" Jan 28 17:55:39 crc kubenswrapper[4903]: I0128 17:55:39.512713 4903 generic.go:334] "Generic (PLEG): container finished" podID="2905c4d7-d4f2-49ef-bb86-01338c5a2ccc" containerID="df3de3504df38a3fa8f3917ac4b6cddd87ff2e87bee966d4583b0d65213c0b1f" exitCode=0 Jan 28 17:55:39 crc kubenswrapper[4903]: I0128 17:55:39.512794 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-bmhqz" event={"ID":"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc","Type":"ContainerDied","Data":"df3de3504df38a3fa8f3917ac4b6cddd87ff2e87bee966d4583b0d65213c0b1f"} Jan 28 17:55:40 crc kubenswrapper[4903]: I0128 17:55:40.937728 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.071684 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-inventory-0\") pod \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.072095 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4dmr\" (UniqueName: \"kubernetes.io/projected/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-kube-api-access-b4dmr\") pod \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.072183 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-ssh-key-openstack-cell1\") pod \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\" (UID: \"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc\") " Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.077508 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-kube-api-access-b4dmr" (OuterVolumeSpecName: "kube-api-access-b4dmr") pod "2905c4d7-d4f2-49ef-bb86-01338c5a2ccc" (UID: "2905c4d7-d4f2-49ef-bb86-01338c5a2ccc"). InnerVolumeSpecName "kube-api-access-b4dmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.104873 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2905c4d7-d4f2-49ef-bb86-01338c5a2ccc" (UID: "2905c4d7-d4f2-49ef-bb86-01338c5a2ccc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.118988 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2905c4d7-d4f2-49ef-bb86-01338c5a2ccc" (UID: "2905c4d7-d4f2-49ef-bb86-01338c5a2ccc"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.175392 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4dmr\" (UniqueName: \"kubernetes.io/projected/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-kube-api-access-b4dmr\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.175431 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.175442 4903 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2905c4d7-d4f2-49ef-bb86-01338c5a2ccc-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.539984 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-bmhqz" event={"ID":"2905c4d7-d4f2-49ef-bb86-01338c5a2ccc","Type":"ContainerDied","Data":"5084dafb863349be6b136aa1332dd4f935f028a46e4d325fcffa8394c1ae4117"} Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.540029 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5084dafb863349be6b136aa1332dd4f935f028a46e4d325fcffa8394c1ae4117" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.540144 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-bmhqz" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.614693 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-mmrxx"] Jan 28 17:55:41 crc kubenswrapper[4903]: E0128 17:55:41.615177 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2905c4d7-d4f2-49ef-bb86-01338c5a2ccc" containerName="ssh-known-hosts-openstack" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.615199 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="2905c4d7-d4f2-49ef-bb86-01338c5a2ccc" containerName="ssh-known-hosts-openstack" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.615474 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="2905c4d7-d4f2-49ef-bb86-01338c5a2ccc" containerName="ssh-known-hosts-openstack" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.616372 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.619425 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.619568 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.619433 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.619918 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.630651 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-mmrxx"] Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.789040 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-inventory\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.789341 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t9z5\" (UniqueName: \"kubernetes.io/projected/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-kube-api-access-5t9z5\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.789458 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.892021 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.892263 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-inventory\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.892285 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t9z5\" (UniqueName: \"kubernetes.io/projected/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-kube-api-access-5t9z5\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 
17:55:41.896131 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.896620 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-inventory\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.909082 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t9z5\" (UniqueName: \"kubernetes.io/projected/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-kube-api-access-5t9z5\") pod \"run-os-openstack-openstack-cell1-mmrxx\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:41 crc kubenswrapper[4903]: I0128 17:55:41.938179 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:42 crc kubenswrapper[4903]: I0128 17:55:42.537313 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-mmrxx"] Jan 28 17:55:42 crc kubenswrapper[4903]: W0128 17:55:42.550458 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cababf9_16e9_41d6_b813_8f0f1acb4d5e.slice/crio-246944d2a47257d89237fc23bd55e4e5fbf660ef7452aececcc6b8fe2572d7d8 WatchSource:0}: Error finding container 246944d2a47257d89237fc23bd55e4e5fbf660ef7452aececcc6b8fe2572d7d8: Status 404 returned error can't find the container with id 246944d2a47257d89237fc23bd55e4e5fbf660ef7452aececcc6b8fe2572d7d8 Jan 28 17:55:43 crc kubenswrapper[4903]: I0128 17:55:43.568672 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" event={"ID":"7cababf9-16e9-41d6-b813-8f0f1acb4d5e","Type":"ContainerStarted","Data":"3ccb037b774da41ec56514d0786e2a0b5d2acf05430bd9d89cab59c7a29d6d12"} Jan 28 17:55:43 crc kubenswrapper[4903]: I0128 17:55:43.568724 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" event={"ID":"7cababf9-16e9-41d6-b813-8f0f1acb4d5e","Type":"ContainerStarted","Data":"246944d2a47257d89237fc23bd55e4e5fbf660ef7452aececcc6b8fe2572d7d8"} Jan 28 17:55:43 crc kubenswrapper[4903]: I0128 17:55:43.602869 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" podStartSLOduration=2.1424818979999998 podStartE2EDuration="2.602849029s" podCreationTimestamp="2026-01-28 17:55:41 +0000 UTC" firstStartedPulling="2026-01-28 17:55:42.556836538 +0000 UTC m=+7814.832808059" lastFinishedPulling="2026-01-28 17:55:43.017203649 +0000 UTC m=+7815.293175190" observedRunningTime="2026-01-28 17:55:43.589482493 +0000 UTC m=+7815.865454044" watchObservedRunningTime="2026-01-28 17:55:43.602849029 +0000 UTC m=+7815.878820550" Jan 28 17:55:53 crc kubenswrapper[4903]: I0128 17:55:53.668222 4903 generic.go:334] "Generic (PLEG): container finished" podID="7cababf9-16e9-41d6-b813-8f0f1acb4d5e" 
containerID="3ccb037b774da41ec56514d0786e2a0b5d2acf05430bd9d89cab59c7a29d6d12" exitCode=0 Jan 28 17:55:53 crc kubenswrapper[4903]: I0128 17:55:53.668312 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" event={"ID":"7cababf9-16e9-41d6-b813-8f0f1acb4d5e","Type":"ContainerDied","Data":"3ccb037b774da41ec56514d0786e2a0b5d2acf05430bd9d89cab59c7a29d6d12"} Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.078657 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.192257 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t9z5\" (UniqueName: \"kubernetes.io/projected/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-kube-api-access-5t9z5\") pod \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.192427 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-inventory\") pod \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.192662 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-ssh-key-openstack-cell1\") pod \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\" (UID: \"7cababf9-16e9-41d6-b813-8f0f1acb4d5e\") " Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.200116 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-kube-api-access-5t9z5" (OuterVolumeSpecName: "kube-api-access-5t9z5") pod "7cababf9-16e9-41d6-b813-8f0f1acb4d5e" (UID: "7cababf9-16e9-41d6-b813-8f0f1acb4d5e"). InnerVolumeSpecName "kube-api-access-5t9z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.223507 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7cababf9-16e9-41d6-b813-8f0f1acb4d5e" (UID: "7cababf9-16e9-41d6-b813-8f0f1acb4d5e"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.223955 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-inventory" (OuterVolumeSpecName: "inventory") pod "7cababf9-16e9-41d6-b813-8f0f1acb4d5e" (UID: "7cababf9-16e9-41d6-b813-8f0f1acb4d5e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.296792 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.296833 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.296847 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t9z5\" (UniqueName: \"kubernetes.io/projected/7cababf9-16e9-41d6-b813-8f0f1acb4d5e-kube-api-access-5t9z5\") on node \"crc\" DevicePath \"\"" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.687769 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" event={"ID":"7cababf9-16e9-41d6-b813-8f0f1acb4d5e","Type":"ContainerDied","Data":"246944d2a47257d89237fc23bd55e4e5fbf660ef7452aececcc6b8fe2572d7d8"} Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.687822 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246944d2a47257d89237fc23bd55e4e5fbf660ef7452aececcc6b8fe2572d7d8" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.687836 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mmrxx" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.763511 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-mf8mz"] Jan 28 17:55:55 crc kubenswrapper[4903]: E0128 17:55:55.763965 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cababf9-16e9-41d6-b813-8f0f1acb4d5e" containerName="run-os-openstack-openstack-cell1" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.763983 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cababf9-16e9-41d6-b813-8f0f1acb4d5e" containerName="run-os-openstack-openstack-cell1" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.764173 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cababf9-16e9-41d6-b813-8f0f1acb4d5e" containerName="run-os-openstack-openstack-cell1" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.765181 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.767108 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.767220 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.767281 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.773270 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.781768 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-mf8mz"] Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.909186 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5k76\" (UniqueName: \"kubernetes.io/projected/ce7fafd2-47b5-47fd-9073-3ee4658e764d-kube-api-access-k5k76\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.909485 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-inventory\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:55 crc kubenswrapper[4903]: I0128 17:55:55.909629 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.011896 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-inventory\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.011949 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.012120 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5k76\" (UniqueName: \"kubernetes.io/projected/ce7fafd2-47b5-47fd-9073-3ee4658e764d-kube-api-access-k5k76\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc 
kubenswrapper[4903]: I0128 17:55:56.025702 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-inventory\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.026140 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.027944 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5k76\" (UniqueName: \"kubernetes.io/projected/ce7fafd2-47b5-47fd-9073-3ee4658e764d-kube-api-access-k5k76\") pod \"reboot-os-openstack-openstack-cell1-mf8mz\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.084351 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.614219 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.614611 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.614658 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.615488 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.615594 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" gracePeriod=600 Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.653715 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-mf8mz"] Jan 28 17:55:56 crc kubenswrapper[4903]: I0128 17:55:56.697852 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" 
event={"ID":"ce7fafd2-47b5-47fd-9073-3ee4658e764d","Type":"ContainerStarted","Data":"6b14e88f7ce12be67e07863cba8f25157ca1a7f07fc0aa4e30708bf714a379b6"} Jan 28 17:55:56 crc kubenswrapper[4903]: E0128 17:55:56.740127 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:55:57 crc kubenswrapper[4903]: I0128 17:55:57.709432 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" exitCode=0 Jan 28 17:55:57 crc kubenswrapper[4903]: I0128 17:55:57.709523 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed"} Jan 28 17:55:57 crc kubenswrapper[4903]: I0128 17:55:57.710089 4903 scope.go:117] "RemoveContainer" containerID="8e6802290420c6d59f256a6272c07630904cfdec2373baad19af691305312c46" Jan 28 17:55:57 crc kubenswrapper[4903]: I0128 17:55:57.710934 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:55:57 crc kubenswrapper[4903]: E0128 17:55:57.711211 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:55:57 crc kubenswrapper[4903]: I0128 17:55:57.713561 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" event={"ID":"ce7fafd2-47b5-47fd-9073-3ee4658e764d","Type":"ContainerStarted","Data":"c7bd195e3ef1abaad0cdf76bdd8c1f9226dcf9b57e416fc5c3da7eb2181f4fa4"} Jan 28 17:55:57 crc kubenswrapper[4903]: I0128 17:55:57.760632 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" podStartSLOduration=2.312504086 podStartE2EDuration="2.760609254s" podCreationTimestamp="2026-01-28 17:55:55 +0000 UTC" firstStartedPulling="2026-01-28 17:55:56.661351485 +0000 UTC m=+7828.937322996" lastFinishedPulling="2026-01-28 17:55:57.109456653 +0000 UTC m=+7829.385428164" observedRunningTime="2026-01-28 17:55:57.750834647 +0000 UTC m=+7830.026806188" watchObservedRunningTime="2026-01-28 17:55:57.760609254 +0000 UTC m=+7830.036580765" Jan 28 17:56:09 crc kubenswrapper[4903]: I0128 17:56:09.413706 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:56:09 crc kubenswrapper[4903]: E0128 17:56:09.414568 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:56:12 crc kubenswrapper[4903]: I0128 17:56:12.857732 4903 generic.go:334] "Generic (PLEG): container finished" podID="ce7fafd2-47b5-47fd-9073-3ee4658e764d" containerID="c7bd195e3ef1abaad0cdf76bdd8c1f9226dcf9b57e416fc5c3da7eb2181f4fa4" exitCode=0 Jan 28 17:56:12 crc kubenswrapper[4903]: I0128 17:56:12.857951 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" event={"ID":"ce7fafd2-47b5-47fd-9073-3ee4658e764d","Type":"ContainerDied","Data":"c7bd195e3ef1abaad0cdf76bdd8c1f9226dcf9b57e416fc5c3da7eb2181f4fa4"} Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.268910 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.433735 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-inventory\") pod \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.433821 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5k76\" (UniqueName: \"kubernetes.io/projected/ce7fafd2-47b5-47fd-9073-3ee4658e764d-kube-api-access-k5k76\") pod \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.433902 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-ssh-key-openstack-cell1\") pod \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\" (UID: \"ce7fafd2-47b5-47fd-9073-3ee4658e764d\") " Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.439207 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7fafd2-47b5-47fd-9073-3ee4658e764d-kube-api-access-k5k76" (OuterVolumeSpecName: "kube-api-access-k5k76") pod "ce7fafd2-47b5-47fd-9073-3ee4658e764d" (UID: "ce7fafd2-47b5-47fd-9073-3ee4658e764d"). InnerVolumeSpecName "kube-api-access-k5k76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.464309 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-inventory" (OuterVolumeSpecName: "inventory") pod "ce7fafd2-47b5-47fd-9073-3ee4658e764d" (UID: "ce7fafd2-47b5-47fd-9073-3ee4658e764d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.467233 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ce7fafd2-47b5-47fd-9073-3ee4658e764d" (UID: "ce7fafd2-47b5-47fd-9073-3ee4658e764d"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.536480 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.536554 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5k76\" (UniqueName: \"kubernetes.io/projected/ce7fafd2-47b5-47fd-9073-3ee4658e764d-kube-api-access-k5k76\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.536569 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ce7fafd2-47b5-47fd-9073-3ee4658e764d-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.878574 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" event={"ID":"ce7fafd2-47b5-47fd-9073-3ee4658e764d","Type":"ContainerDied","Data":"6b14e88f7ce12be67e07863cba8f25157ca1a7f07fc0aa4e30708bf714a379b6"} Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.878876 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b14e88f7ce12be67e07863cba8f25157ca1a7f07fc0aa4e30708bf714a379b6" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.878677 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-mf8mz" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.956320 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-5rvcl"] Jan 28 17:56:14 crc kubenswrapper[4903]: E0128 17:56:14.956781 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7fafd2-47b5-47fd-9073-3ee4658e764d" containerName="reboot-os-openstack-openstack-cell1" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.956799 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7fafd2-47b5-47fd-9073-3ee4658e764d" containerName="reboot-os-openstack-openstack-cell1" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.956991 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7fafd2-47b5-47fd-9073-3ee4658e764d" containerName="reboot-os-openstack-openstack-cell1" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.957692 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.960684 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.961118 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.961294 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.961453 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.961622 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.961786 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.962000 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.963515 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:56:14 crc kubenswrapper[4903]: I0128 17:56:14.986875 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-5rvcl"] Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.148903 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.149279 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.149386 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.149546 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " 
pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.149691 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.149816 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.149931 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150034 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgjz\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-kube-api-access-wzgjz\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150185 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150221 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150240 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150273 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150323 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150561 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-inventory\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.150662 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252291 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252374 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252407 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252455 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc 
kubenswrapper[4903]: I0128 17:56:15.252483 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252512 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252583 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252607 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgjz\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-kube-api-access-wzgjz\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252630 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252650 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252666 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252692 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " 
pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252734 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252800 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-inventory\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.252822 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.257789 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.259966 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.260184 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.260485 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.262974 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-neutron-metadata-default-certs-0\") pod 
\"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.264144 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.264202 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.264316 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.264512 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.264658 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.264792 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.265865 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.266002 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-inventory\") pod 
\"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.266674 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.271857 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgjz\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-kube-api-access-wzgjz\") pod \"install-certs-openstack-openstack-cell1-5rvcl\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.273902 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.785484 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-5rvcl"] Jan 28 17:56:15 crc kubenswrapper[4903]: I0128 17:56:15.888816 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" event={"ID":"1ba63562-71f4-4fb6-891a-9ef5ef522a81","Type":"ContainerStarted","Data":"0a58a2b909ae00fb9775e4d17692ff556fabc5a30a9e4e0b405a1ccc6b26c10b"} Jan 28 17:56:16 crc kubenswrapper[4903]: I0128 17:56:16.901093 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" event={"ID":"1ba63562-71f4-4fb6-891a-9ef5ef522a81","Type":"ContainerStarted","Data":"3718671f719714213d8ebc1a6adee14ec3867fdce65636c8c4064121a9081bd4"} Jan 28 17:56:16 crc kubenswrapper[4903]: I0128 17:56:16.942232 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" podStartSLOduration=2.487664083 podStartE2EDuration="2.942207466s" podCreationTimestamp="2026-01-28 17:56:14 +0000 UTC" firstStartedPulling="2026-01-28 17:56:15.792332473 +0000 UTC m=+7848.068303984" lastFinishedPulling="2026-01-28 17:56:16.246875856 +0000 UTC m=+7848.522847367" observedRunningTime="2026-01-28 17:56:16.927198345 +0000 UTC m=+7849.203169876" watchObservedRunningTime="2026-01-28 17:56:16.942207466 +0000 UTC m=+7849.218178987" Jan 28 17:56:23 crc kubenswrapper[4903]: I0128 17:56:23.414076 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:56:23 crc kubenswrapper[4903]: E0128 17:56:23.415062 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:56:37 crc kubenswrapper[4903]: I0128 17:56:37.414704 4903 scope.go:117] "RemoveContainer" 
containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:56:37 crc kubenswrapper[4903]: E0128 17:56:37.416408 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:56:48 crc kubenswrapper[4903]: I0128 17:56:48.449611 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:56:48 crc kubenswrapper[4903]: E0128 17:56:48.450407 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:56:53 crc kubenswrapper[4903]: I0128 17:56:53.247433 4903 generic.go:334] "Generic (PLEG): container finished" podID="1ba63562-71f4-4fb6-891a-9ef5ef522a81" containerID="3718671f719714213d8ebc1a6adee14ec3867fdce65636c8c4064121a9081bd4" exitCode=0 Jan 28 17:56:53 crc kubenswrapper[4903]: I0128 17:56:53.247559 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" event={"ID":"1ba63562-71f4-4fb6-891a-9ef5ef522a81","Type":"ContainerDied","Data":"3718671f719714213d8ebc1a6adee14ec3867fdce65636c8c4064121a9081bd4"} Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.771995 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947383 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-bootstrap-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947701 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-telemetry-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947767 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-telemetry-default-certs-0\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947799 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-dhcp-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947837 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-libvirt-default-certs-0\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947881 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-metadata-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947896 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ssh-key-openstack-cell1\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947930 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-nova-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947959 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-ovn-default-certs-0\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: 
\"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.947990 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ovn-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.948037 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-sriov-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.948068 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-neutron-metadata-default-certs-0\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.948101 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzgjz\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-kube-api-access-wzgjz\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.948318 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-inventory\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.948379 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-libvirt-combined-ca-bundle\") pod \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\" (UID: \"1ba63562-71f4-4fb6-891a-9ef5ef522a81\") " Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.956113 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.956233 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.956509 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957082 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957219 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957239 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957248 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957261 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957208 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957324 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957096 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.957966 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-kube-api-access-wzgjz" (OuterVolumeSpecName: "kube-api-access-wzgjz") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "kube-api-access-wzgjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.958843 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.981496 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:54 crc kubenswrapper[4903]: I0128 17:56:54.987493 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-inventory" (OuterVolumeSpecName: "inventory") pod "1ba63562-71f4-4fb6-891a-9ef5ef522a81" (UID: "1ba63562-71f4-4fb6-891a-9ef5ef522a81"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051222 4903 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051256 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051269 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051280 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051289 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051298 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051307 4903 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051315 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051325 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051335 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051344 4903 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051354 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzgjz\" (UniqueName: 
\"kubernetes.io/projected/1ba63562-71f4-4fb6-891a-9ef5ef522a81-kube-api-access-wzgjz\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051363 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051373 4903 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.051381 4903 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba63562-71f4-4fb6-891a-9ef5ef522a81-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.270772 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" event={"ID":"1ba63562-71f4-4fb6-891a-9ef5ef522a81","Type":"ContainerDied","Data":"0a58a2b909ae00fb9775e4d17692ff556fabc5a30a9e4e0b405a1ccc6b26c10b"} Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.270817 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a58a2b909ae00fb9775e4d17692ff556fabc5a30a9e4e0b405a1ccc6b26c10b" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.270886 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-5rvcl" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.429508 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-jjcs5"] Jan 28 17:56:55 crc kubenswrapper[4903]: E0128 17:56:55.430089 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba63562-71f4-4fb6-891a-9ef5ef522a81" containerName="install-certs-openstack-openstack-cell1" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.430122 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba63562-71f4-4fb6-891a-9ef5ef522a81" containerName="install-certs-openstack-openstack-cell1" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.430493 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba63562-71f4-4fb6-891a-9ef5ef522a81" containerName="install-certs-openstack-openstack-cell1" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.431261 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.433027 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.433433 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.434732 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.435007 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.437960 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.453580 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-jjcs5"] Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.563012 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.563184 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/04063c1b-379e-488a-9650-909da1cc7b99-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.563210 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.563232 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv25s\" (UniqueName: \"kubernetes.io/projected/04063c1b-379e-488a-9650-909da1cc7b99-kube-api-access-nv25s\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.563265 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-inventory\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.664619 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/04063c1b-379e-488a-9650-909da1cc7b99-ovncontroller-config-0\") pod 
\"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.664884 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.664922 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv25s\" (UniqueName: \"kubernetes.io/projected/04063c1b-379e-488a-9650-909da1cc7b99-kube-api-access-nv25s\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.664964 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-inventory\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.665047 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.665491 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/04063c1b-379e-488a-9650-909da1cc7b99-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.670102 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-inventory\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.672398 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.673252 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.681077 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nv25s\" (UniqueName: \"kubernetes.io/projected/04063c1b-379e-488a-9650-909da1cc7b99-kube-api-access-nv25s\") pod \"ovn-openstack-openstack-cell1-jjcs5\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:55 crc kubenswrapper[4903]: I0128 17:56:55.750358 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:56:56 crc kubenswrapper[4903]: I0128 17:56:56.332744 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-jjcs5"] Jan 28 17:56:57 crc kubenswrapper[4903]: I0128 17:56:57.290052 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" event={"ID":"04063c1b-379e-488a-9650-909da1cc7b99","Type":"ContainerStarted","Data":"c12719f0222966874f3383d18ae6ea0570fcb3b6f545b6950c63ff8f2dc0109a"} Jan 28 17:56:59 crc kubenswrapper[4903]: I0128 17:56:59.310007 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" event={"ID":"04063c1b-379e-488a-9650-909da1cc7b99","Type":"ContainerStarted","Data":"331c2f4f4ab7aeaf469f1020f927ed6133819a90abe3058043df246d9071c846"} Jan 28 17:56:59 crc kubenswrapper[4903]: I0128 17:56:59.328412 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" podStartSLOduration=2.549278668 podStartE2EDuration="4.328388322s" podCreationTimestamp="2026-01-28 17:56:55 +0000 UTC" firstStartedPulling="2026-01-28 17:56:56.333198264 +0000 UTC m=+7888.609169775" lastFinishedPulling="2026-01-28 17:56:58.112307918 +0000 UTC m=+7890.388279429" observedRunningTime="2026-01-28 17:56:59.325841392 +0000 UTC m=+7891.601812903" watchObservedRunningTime="2026-01-28 17:56:59.328388322 +0000 UTC m=+7891.604359853" Jan 28 17:57:02 crc kubenswrapper[4903]: I0128 17:57:02.414078 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:57:02 crc kubenswrapper[4903]: E0128 17:57:02.414954 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:57:14 crc kubenswrapper[4903]: I0128 17:57:14.413939 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:57:14 crc kubenswrapper[4903]: E0128 17:57:14.414871 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:57:26 crc kubenswrapper[4903]: I0128 17:57:26.414149 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:57:26 crc kubenswrapper[4903]: E0128 17:57:26.415223 4903 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:57:39 crc kubenswrapper[4903]: I0128 17:57:39.414770 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:57:39 crc kubenswrapper[4903]: E0128 17:57:39.416051 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:57:52 crc kubenswrapper[4903]: I0128 17:57:52.413632 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:57:52 crc kubenswrapper[4903]: E0128 17:57:52.414784 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:57:59 crc kubenswrapper[4903]: I0128 17:57:59.907357 4903 generic.go:334] "Generic (PLEG): container finished" podID="04063c1b-379e-488a-9650-909da1cc7b99" containerID="331c2f4f4ab7aeaf469f1020f927ed6133819a90abe3058043df246d9071c846" exitCode=0 Jan 28 17:57:59 crc kubenswrapper[4903]: I0128 17:57:59.907444 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" event={"ID":"04063c1b-379e-488a-9650-909da1cc7b99","Type":"ContainerDied","Data":"331c2f4f4ab7aeaf469f1020f927ed6133819a90abe3058043df246d9071c846"} Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.375578 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.475627 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-inventory\") pod \"04063c1b-379e-488a-9650-909da1cc7b99\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.475892 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ovn-combined-ca-bundle\") pod \"04063c1b-379e-488a-9650-909da1cc7b99\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.475939 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv25s\" (UniqueName: \"kubernetes.io/projected/04063c1b-379e-488a-9650-909da1cc7b99-kube-api-access-nv25s\") pod \"04063c1b-379e-488a-9650-909da1cc7b99\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.475985 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/04063c1b-379e-488a-9650-909da1cc7b99-ovncontroller-config-0\") pod \"04063c1b-379e-488a-9650-909da1cc7b99\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.476023 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ssh-key-openstack-cell1\") pod \"04063c1b-379e-488a-9650-909da1cc7b99\" (UID: \"04063c1b-379e-488a-9650-909da1cc7b99\") " Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.482901 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04063c1b-379e-488a-9650-909da1cc7b99-kube-api-access-nv25s" (OuterVolumeSpecName: "kube-api-access-nv25s") pod "04063c1b-379e-488a-9650-909da1cc7b99" (UID: "04063c1b-379e-488a-9650-909da1cc7b99"). InnerVolumeSpecName "kube-api-access-nv25s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.486895 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "04063c1b-379e-488a-9650-909da1cc7b99" (UID: "04063c1b-379e-488a-9650-909da1cc7b99"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.512119 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04063c1b-379e-488a-9650-909da1cc7b99-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "04063c1b-379e-488a-9650-909da1cc7b99" (UID: "04063c1b-379e-488a-9650-909da1cc7b99"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.523658 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-inventory" (OuterVolumeSpecName: "inventory") pod "04063c1b-379e-488a-9650-909da1cc7b99" (UID: "04063c1b-379e-488a-9650-909da1cc7b99"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.528006 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "04063c1b-379e-488a-9650-909da1cc7b99" (UID: "04063c1b-379e-488a-9650-909da1cc7b99"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.579348 4903 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.579377 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv25s\" (UniqueName: \"kubernetes.io/projected/04063c1b-379e-488a-9650-909da1cc7b99-kube-api-access-nv25s\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.579387 4903 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/04063c1b-379e-488a-9650-909da1cc7b99-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.579397 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.579407 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04063c1b-379e-488a-9650-909da1cc7b99-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.929179 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" event={"ID":"04063c1b-379e-488a-9650-909da1cc7b99","Type":"ContainerDied","Data":"c12719f0222966874f3383d18ae6ea0570fcb3b6f545b6950c63ff8f2dc0109a"} Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.929228 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12719f0222966874f3383d18ae6ea0570fcb3b6f545b6950c63ff8f2dc0109a" Jan 28 17:58:01 crc kubenswrapper[4903]: I0128 17:58:01.929325 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-jjcs5" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.046886 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-pt55w"] Jan 28 17:58:02 crc kubenswrapper[4903]: E0128 17:58:02.047353 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04063c1b-379e-488a-9650-909da1cc7b99" containerName="ovn-openstack-openstack-cell1" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.047372 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="04063c1b-379e-488a-9650-909da1cc7b99" containerName="ovn-openstack-openstack-cell1" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.047647 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="04063c1b-379e-488a-9650-909da1cc7b99" containerName="ovn-openstack-openstack-cell1" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.048650 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.051048 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.051059 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.051453 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.051642 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.053238 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.058455 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.073120 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-pt55w"] Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.199151 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklp4\" (UniqueName: \"kubernetes.io/projected/e68e3115-552c-473f-a082-092b794ba4cd-kube-api-access-nklp4\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.199215 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.199636 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.199697 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.199785 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.199922 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.302576 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.302639 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.302668 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.302719 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.302790 4903 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklp4\" (UniqueName: \"kubernetes.io/projected/e68e3115-552c-473f-a082-092b794ba4cd-kube-api-access-nklp4\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.302832 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.308771 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.312736 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.314156 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.314496 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.314678 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.330646 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklp4\" (UniqueName: \"kubernetes.io/projected/e68e3115-552c-473f-a082-092b794ba4cd-kube-api-access-nklp4\") pod \"neutron-metadata-openstack-openstack-cell1-pt55w\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 
17:58:02.371059 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:02 crc kubenswrapper[4903]: I0128 17:58:02.978135 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-pt55w"] Jan 28 17:58:03 crc kubenswrapper[4903]: I0128 17:58:03.949931 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" event={"ID":"e68e3115-552c-473f-a082-092b794ba4cd","Type":"ContainerStarted","Data":"98d9ca520d7b581f5fdc789fc7ae6cf12b03a231aaacec4279540e5c4d91d347"} Jan 28 17:58:04 crc kubenswrapper[4903]: I0128 17:58:04.963335 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" event={"ID":"e68e3115-552c-473f-a082-092b794ba4cd","Type":"ContainerStarted","Data":"3bf8a6a3f55ca62e192bb4a168a445e71f70efc31be2e9d62e9b9d1c7d8ab85b"} Jan 28 17:58:04 crc kubenswrapper[4903]: I0128 17:58:04.990335 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" podStartSLOduration=1.8311695019999998 podStartE2EDuration="2.990311458s" podCreationTimestamp="2026-01-28 17:58:02 +0000 UTC" firstStartedPulling="2026-01-28 17:58:02.97889173 +0000 UTC m=+7955.254863241" lastFinishedPulling="2026-01-28 17:58:04.138033686 +0000 UTC m=+7956.414005197" observedRunningTime="2026-01-28 17:58:04.979805781 +0000 UTC m=+7957.255777292" watchObservedRunningTime="2026-01-28 17:58:04.990311458 +0000 UTC m=+7957.266282959" Jan 28 17:58:06 crc kubenswrapper[4903]: I0128 17:58:06.413107 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:58:06 crc kubenswrapper[4903]: E0128 17:58:06.413778 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:58:18 crc kubenswrapper[4903]: I0128 17:58:18.421160 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:58:18 crc kubenswrapper[4903]: E0128 17:58:18.422097 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.555168 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fzz9t"] Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.566782 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.595364 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzz9t"] Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.641997 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-catalog-content\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.642106 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-utilities\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.642150 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzwn\" (UniqueName: \"kubernetes.io/projected/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-kube-api-access-8bzwn\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.743840 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-catalog-content\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.743931 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-utilities\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.743970 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bzwn\" (UniqueName: \"kubernetes.io/projected/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-kube-api-access-8bzwn\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.744433 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-catalog-content\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.744727 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-utilities\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.768950 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8bzwn\" (UniqueName: \"kubernetes.io/projected/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-kube-api-access-8bzwn\") pod \"redhat-marketplace-fzz9t\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:25 crc kubenswrapper[4903]: I0128 17:58:25.922018 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:26 crc kubenswrapper[4903]: I0128 17:58:26.490960 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzz9t"] Jan 28 17:58:27 crc kubenswrapper[4903]: I0128 17:58:27.201016 4903 generic.go:334] "Generic (PLEG): container finished" podID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerID="58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c" exitCode=0 Jan 28 17:58:27 crc kubenswrapper[4903]: I0128 17:58:27.201265 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzz9t" event={"ID":"833a5638-5a12-4b8d-9ca2-d2f2e87c861b","Type":"ContainerDied","Data":"58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c"} Jan 28 17:58:27 crc kubenswrapper[4903]: I0128 17:58:27.201348 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzz9t" event={"ID":"833a5638-5a12-4b8d-9ca2-d2f2e87c861b","Type":"ContainerStarted","Data":"f3d3c4742ae2b3b4e40b4fc300ad4ecce7f76e91ede09425997f8eb79b7b9985"} Jan 28 17:58:29 crc kubenswrapper[4903]: I0128 17:58:29.221081 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzz9t" event={"ID":"833a5638-5a12-4b8d-9ca2-d2f2e87c861b","Type":"ContainerStarted","Data":"3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5"} Jan 28 17:58:31 crc kubenswrapper[4903]: I0128 17:58:31.244043 4903 generic.go:334] "Generic (PLEG): container finished" podID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerID="3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5" exitCode=0 Jan 28 17:58:31 crc kubenswrapper[4903]: I0128 17:58:31.244126 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzz9t" event={"ID":"833a5638-5a12-4b8d-9ca2-d2f2e87c861b","Type":"ContainerDied","Data":"3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5"} Jan 28 17:58:32 crc kubenswrapper[4903]: I0128 17:58:32.414047 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:58:32 crc kubenswrapper[4903]: E0128 17:58:32.415282 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:58:33 crc kubenswrapper[4903]: I0128 17:58:33.265430 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzz9t" event={"ID":"833a5638-5a12-4b8d-9ca2-d2f2e87c861b","Type":"ContainerStarted","Data":"6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6"} Jan 28 17:58:33 crc kubenswrapper[4903]: I0128 17:58:33.287791 4903 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-fzz9t" podStartSLOduration=3.095155526 podStartE2EDuration="8.287762501s" podCreationTimestamp="2026-01-28 17:58:25 +0000 UTC" firstStartedPulling="2026-01-28 17:58:27.203841215 +0000 UTC m=+7979.479812726" lastFinishedPulling="2026-01-28 17:58:32.39644819 +0000 UTC m=+7984.672419701" observedRunningTime="2026-01-28 17:58:33.28626281 +0000 UTC m=+7985.562234341" watchObservedRunningTime="2026-01-28 17:58:33.287762501 +0000 UTC m=+7985.563734032" Jan 28 17:58:35 crc kubenswrapper[4903]: I0128 17:58:35.922790 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:35 crc kubenswrapper[4903]: I0128 17:58:35.923195 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:36 crc kubenswrapper[4903]: I0128 17:58:36.001122 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:45 crc kubenswrapper[4903]: I0128 17:58:45.413120 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:58:45 crc kubenswrapper[4903]: E0128 17:58:45.414885 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:58:45 crc kubenswrapper[4903]: I0128 17:58:45.979659 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:46 crc kubenswrapper[4903]: I0128 17:58:46.039928 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzz9t"] Jan 28 17:58:46 crc kubenswrapper[4903]: I0128 17:58:46.426023 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fzz9t" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="registry-server" containerID="cri-o://6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6" gracePeriod=2 Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.003683 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.140420 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-catalog-content\") pod \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.140616 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bzwn\" (UniqueName: \"kubernetes.io/projected/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-kube-api-access-8bzwn\") pod \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.140779 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-utilities\") pod \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\" (UID: \"833a5638-5a12-4b8d-9ca2-d2f2e87c861b\") " Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.141502 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-utilities" (OuterVolumeSpecName: "utilities") pod "833a5638-5a12-4b8d-9ca2-d2f2e87c861b" (UID: "833a5638-5a12-4b8d-9ca2-d2f2e87c861b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.149297 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-kube-api-access-8bzwn" (OuterVolumeSpecName: "kube-api-access-8bzwn") pod "833a5638-5a12-4b8d-9ca2-d2f2e87c861b" (UID: "833a5638-5a12-4b8d-9ca2-d2f2e87c861b"). InnerVolumeSpecName "kube-api-access-8bzwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.166851 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "833a5638-5a12-4b8d-9ca2-d2f2e87c861b" (UID: "833a5638-5a12-4b8d-9ca2-d2f2e87c861b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.243619 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.243665 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bzwn\" (UniqueName: \"kubernetes.io/projected/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-kube-api-access-8bzwn\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.243678 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/833a5638-5a12-4b8d-9ca2-d2f2e87c861b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.437096 4903 generic.go:334] "Generic (PLEG): container finished" podID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerID="6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6" exitCode=0 Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.437149 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzz9t" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.437156 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzz9t" event={"ID":"833a5638-5a12-4b8d-9ca2-d2f2e87c861b","Type":"ContainerDied","Data":"6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6"} Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.437202 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzz9t" event={"ID":"833a5638-5a12-4b8d-9ca2-d2f2e87c861b","Type":"ContainerDied","Data":"f3d3c4742ae2b3b4e40b4fc300ad4ecce7f76e91ede09425997f8eb79b7b9985"} Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.437225 4903 scope.go:117] "RemoveContainer" containerID="6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.457336 4903 scope.go:117] "RemoveContainer" containerID="3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.473942 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzz9t"] Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.484595 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzz9t"] Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.493512 4903 scope.go:117] "RemoveContainer" containerID="58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.534230 4903 scope.go:117] "RemoveContainer" containerID="6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6" Jan 28 17:58:47 crc kubenswrapper[4903]: E0128 17:58:47.534752 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6\": container with ID starting with 6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6 not found: ID does not exist" containerID="6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.534800 4903 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6"} err="failed to get container status \"6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6\": rpc error: code = NotFound desc = could not find container \"6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6\": container with ID starting with 6037e31f665175afca3f819d174a61a4442fae4c6704142164de710d984c93e6 not found: ID does not exist" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.534831 4903 scope.go:117] "RemoveContainer" containerID="3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5" Jan 28 17:58:47 crc kubenswrapper[4903]: E0128 17:58:47.535299 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5\": container with ID starting with 3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5 not found: ID does not exist" containerID="3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.535331 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5"} err="failed to get container status \"3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5\": rpc error: code = NotFound desc = could not find container \"3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5\": container with ID starting with 3961de51645b59514b4eda4506b1073917f8a1893942d8f5c92088e8faed92a5 not found: ID does not exist" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.535353 4903 scope.go:117] "RemoveContainer" containerID="58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c" Jan 28 17:58:47 crc kubenswrapper[4903]: E0128 17:58:47.536765 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c\": container with ID starting with 58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c not found: ID does not exist" containerID="58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c" Jan 28 17:58:47 crc kubenswrapper[4903]: I0128 17:58:47.536814 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c"} err="failed to get container status \"58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c\": rpc error: code = NotFound desc = could not find container \"58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c\": container with ID starting with 58c1809fec4e7c58b52a1ec16a6fc7c409493c8d7a99abd5abc52040bd9a083c not found: ID does not exist" Jan 28 17:58:48 crc kubenswrapper[4903]: I0128 17:58:48.424287 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" path="/var/lib/kubelet/pods/833a5638-5a12-4b8d-9ca2-d2f2e87c861b/volumes" Jan 28 17:58:54 crc kubenswrapper[4903]: I0128 17:58:54.517966 4903 generic.go:334] "Generic (PLEG): container finished" podID="e68e3115-552c-473f-a082-092b794ba4cd" containerID="3bf8a6a3f55ca62e192bb4a168a445e71f70efc31be2e9d62e9b9d1c7d8ab85b" exitCode=0 Jan 28 17:58:54 crc kubenswrapper[4903]: I0128 
17:58:54.518040 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" event={"ID":"e68e3115-552c-473f-a082-092b794ba4cd","Type":"ContainerDied","Data":"3bf8a6a3f55ca62e192bb4a168a445e71f70efc31be2e9d62e9b9d1c7d8ab85b"} Jan 28 17:58:55 crc kubenswrapper[4903]: I0128 17:58:55.986209 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.076997 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-inventory\") pod \"e68e3115-552c-473f-a082-092b794ba4cd\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.077056 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-ssh-key-openstack-cell1\") pod \"e68e3115-552c-473f-a082-092b794ba4cd\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.077212 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nklp4\" (UniqueName: \"kubernetes.io/projected/e68e3115-552c-473f-a082-092b794ba4cd-kube-api-access-nklp4\") pod \"e68e3115-552c-473f-a082-092b794ba4cd\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.077261 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-metadata-combined-ca-bundle\") pod \"e68e3115-552c-473f-a082-092b794ba4cd\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.077340 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-nova-metadata-neutron-config-0\") pod \"e68e3115-552c-473f-a082-092b794ba4cd\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.077391 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e68e3115-552c-473f-a082-092b794ba4cd\" (UID: \"e68e3115-552c-473f-a082-092b794ba4cd\") " Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.082774 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68e3115-552c-473f-a082-092b794ba4cd-kube-api-access-nklp4" (OuterVolumeSpecName: "kube-api-access-nklp4") pod "e68e3115-552c-473f-a082-092b794ba4cd" (UID: "e68e3115-552c-473f-a082-092b794ba4cd"). InnerVolumeSpecName "kube-api-access-nklp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.082899 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e68e3115-552c-473f-a082-092b794ba4cd" (UID: "e68e3115-552c-473f-a082-092b794ba4cd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.108377 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e68e3115-552c-473f-a082-092b794ba4cd" (UID: "e68e3115-552c-473f-a082-092b794ba4cd"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.108702 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e68e3115-552c-473f-a082-092b794ba4cd" (UID: "e68e3115-552c-473f-a082-092b794ba4cd"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.112623 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-inventory" (OuterVolumeSpecName: "inventory") pod "e68e3115-552c-473f-a082-092b794ba4cd" (UID: "e68e3115-552c-473f-a082-092b794ba4cd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.114296 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e68e3115-552c-473f-a082-092b794ba4cd" (UID: "e68e3115-552c-473f-a082-092b794ba4cd"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.180913 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nklp4\" (UniqueName: \"kubernetes.io/projected/e68e3115-552c-473f-a082-092b794ba4cd-kube-api-access-nklp4\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.180943 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.180957 4903 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.180970 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.180982 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.180990 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e68e3115-552c-473f-a082-092b794ba4cd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.413866 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:58:56 crc kubenswrapper[4903]: E0128 17:58:56.414407 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.542728 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" event={"ID":"e68e3115-552c-473f-a082-092b794ba4cd","Type":"ContainerDied","Data":"98d9ca520d7b581f5fdc789fc7ae6cf12b03a231aaacec4279540e5c4d91d347"} Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.542791 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d9ca520d7b581f5fdc789fc7ae6cf12b03a231aaacec4279540e5c4d91d347" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.542791 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-pt55w" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.640140 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-4vscg"] Jan 28 17:58:56 crc kubenswrapper[4903]: E0128 17:58:56.640748 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="extract-content" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.640772 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="extract-content" Jan 28 17:58:56 crc kubenswrapper[4903]: E0128 17:58:56.640802 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68e3115-552c-473f-a082-092b794ba4cd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.640812 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68e3115-552c-473f-a082-092b794ba4cd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 28 17:58:56 crc kubenswrapper[4903]: E0128 17:58:56.640824 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="registry-server" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.640832 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="registry-server" Jan 28 17:58:56 crc kubenswrapper[4903]: E0128 17:58:56.640844 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="extract-utilities" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.640852 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="extract-utilities" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.641092 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="833a5638-5a12-4b8d-9ca2-d2f2e87c861b" containerName="registry-server" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.641124 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68e3115-552c-473f-a082-092b794ba4cd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.642071 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.643707 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.643942 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.644046 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.644554 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.645360 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.651486 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-4vscg"] Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.793617 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.793745 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-inventory\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.793774 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4n4\" (UniqueName: \"kubernetes.io/projected/f6bc3d5c-5760-45be-ba51-3337d607a4cd-kube-api-access-qf4n4\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.793845 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.794004 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.896123 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.896210 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.896327 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-inventory\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.896355 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf4n4\" (UniqueName: \"kubernetes.io/projected/f6bc3d5c-5760-45be-ba51-3337d607a4cd-kube-api-access-qf4n4\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.896438 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.902436 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-inventory\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.904816 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.905864 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.909291 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 
17:58:56 crc kubenswrapper[4903]: I0128 17:58:56.947809 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf4n4\" (UniqueName: \"kubernetes.io/projected/f6bc3d5c-5760-45be-ba51-3337d607a4cd-kube-api-access-qf4n4\") pod \"libvirt-openstack-openstack-cell1-4vscg\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:57 crc kubenswrapper[4903]: I0128 17:58:57.006847 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 17:58:57 crc kubenswrapper[4903]: I0128 17:58:57.569426 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-4vscg"] Jan 28 17:58:57 crc kubenswrapper[4903]: I0128 17:58:57.579275 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 17:58:58 crc kubenswrapper[4903]: I0128 17:58:58.566244 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" event={"ID":"f6bc3d5c-5760-45be-ba51-3337d607a4cd","Type":"ContainerStarted","Data":"5a0270e957f21190708dbbabe6dc0a09dccdbdb92294a72130edf0a34fc93e61"} Jan 28 17:59:01 crc kubenswrapper[4903]: I0128 17:59:01.593150 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" event={"ID":"f6bc3d5c-5760-45be-ba51-3337d607a4cd","Type":"ContainerStarted","Data":"e054a606ccf23310a657e230caafaec2dfe0a4842e0d9670a3b3228afa928c8e"} Jan 28 17:59:01 crc kubenswrapper[4903]: I0128 17:59:01.616855 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" podStartSLOduration=2.721871471 podStartE2EDuration="5.616829777s" podCreationTimestamp="2026-01-28 17:58:56 +0000 UTC" firstStartedPulling="2026-01-28 17:58:57.578956788 +0000 UTC m=+8009.854928299" lastFinishedPulling="2026-01-28 17:59:00.473915094 +0000 UTC m=+8012.749886605" observedRunningTime="2026-01-28 17:59:01.612148019 +0000 UTC m=+8013.888119560" watchObservedRunningTime="2026-01-28 17:59:01.616829777 +0000 UTC m=+8013.892801309" Jan 28 17:59:09 crc kubenswrapper[4903]: I0128 17:59:09.413714 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:59:09 crc kubenswrapper[4903]: E0128 17:59:09.414502 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:59:21 crc kubenswrapper[4903]: I0128 17:59:21.414064 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:59:21 crc kubenswrapper[4903]: E0128 17:59:21.414828 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:59:32 crc kubenswrapper[4903]: I0128 17:59:32.414106 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:59:32 crc kubenswrapper[4903]: E0128 17:59:32.414957 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:59:46 crc kubenswrapper[4903]: I0128 17:59:46.414348 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:59:46 crc kubenswrapper[4903]: E0128 17:59:46.415886 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 17:59:59 crc kubenswrapper[4903]: I0128 17:59:59.413219 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 17:59:59 crc kubenswrapper[4903]: E0128 17:59:59.413896 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.169809 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4"] Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.171800 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.175054 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.175127 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.194891 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4"] Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.361234 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc71393-59cc-4886-84d9-da6c5087785e-secret-volume\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.361443 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc71393-59cc-4886-84d9-da6c5087785e-config-volume\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.361492 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thk25\" (UniqueName: \"kubernetes.io/projected/6cc71393-59cc-4886-84d9-da6c5087785e-kube-api-access-thk25\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.465412 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc71393-59cc-4886-84d9-da6c5087785e-secret-volume\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.467514 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc71393-59cc-4886-84d9-da6c5087785e-config-volume\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.467936 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thk25\" (UniqueName: \"kubernetes.io/projected/6cc71393-59cc-4886-84d9-da6c5087785e-kube-api-access-thk25\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.468337 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc71393-59cc-4886-84d9-da6c5087785e-config-volume\") pod 
\"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.476389 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc71393-59cc-4886-84d9-da6c5087785e-secret-volume\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.489585 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thk25\" (UniqueName: \"kubernetes.io/projected/6cc71393-59cc-4886-84d9-da6c5087785e-kube-api-access-thk25\") pod \"collect-profiles-29493720-knwn4\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:00 crc kubenswrapper[4903]: I0128 18:00:00.492900 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:01 crc kubenswrapper[4903]: I0128 18:00:01.032947 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4"] Jan 28 18:00:01 crc kubenswrapper[4903]: I0128 18:00:01.316896 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" event={"ID":"6cc71393-59cc-4886-84d9-da6c5087785e","Type":"ContainerStarted","Data":"24d8dd646837f30c9d1292461d4addb3f3c1285d114e9c7dffc2d06d017f6148"} Jan 28 18:00:02 crc kubenswrapper[4903]: I0128 18:00:02.326274 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" event={"ID":"6cc71393-59cc-4886-84d9-da6c5087785e","Type":"ContainerStarted","Data":"6e4d0da0029951322d24a5e93ee0b712933603b7bccbd11ec86b3a2db7a32d81"} Jan 28 18:00:02 crc kubenswrapper[4903]: I0128 18:00:02.348344 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" podStartSLOduration=2.348325678 podStartE2EDuration="2.348325678s" podCreationTimestamp="2026-01-28 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:00:02.338595022 +0000 UTC m=+8074.614566533" watchObservedRunningTime="2026-01-28 18:00:02.348325678 +0000 UTC m=+8074.624297189" Jan 28 18:00:03 crc kubenswrapper[4903]: I0128 18:00:03.354675 4903 generic.go:334] "Generic (PLEG): container finished" podID="6cc71393-59cc-4886-84d9-da6c5087785e" containerID="6e4d0da0029951322d24a5e93ee0b712933603b7bccbd11ec86b3a2db7a32d81" exitCode=0 Jan 28 18:00:03 crc kubenswrapper[4903]: I0128 18:00:03.354785 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" event={"ID":"6cc71393-59cc-4886-84d9-da6c5087785e","Type":"ContainerDied","Data":"6e4d0da0029951322d24a5e93ee0b712933603b7bccbd11ec86b3a2db7a32d81"} Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.546019 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jscvm"] Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.548543 4903 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.579058 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jscvm"] Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.708501 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9h2m\" (UniqueName: \"kubernetes.io/projected/bd21e445-600a-43a4-be88-530a9546cf4f-kube-api-access-f9h2m\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.708738 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-utilities\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.708779 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-catalog-content\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.779135 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.810902 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-utilities\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.810978 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-catalog-content\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.811077 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9h2m\" (UniqueName: \"kubernetes.io/projected/bd21e445-600a-43a4-be88-530a9546cf4f-kube-api-access-f9h2m\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.812127 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-catalog-content\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.812498 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-utilities\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.838856 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9h2m\" (UniqueName: \"kubernetes.io/projected/bd21e445-600a-43a4-be88-530a9546cf4f-kube-api-access-f9h2m\") pod \"community-operators-jscvm\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.888159 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.916199 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc71393-59cc-4886-84d9-da6c5087785e-config-volume\") pod \"6cc71393-59cc-4886-84d9-da6c5087785e\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.916284 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thk25\" (UniqueName: \"kubernetes.io/projected/6cc71393-59cc-4886-84d9-da6c5087785e-kube-api-access-thk25\") pod \"6cc71393-59cc-4886-84d9-da6c5087785e\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.916306 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc71393-59cc-4886-84d9-da6c5087785e-secret-volume\") pod \"6cc71393-59cc-4886-84d9-da6c5087785e\" (UID: \"6cc71393-59cc-4886-84d9-da6c5087785e\") " Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.917121 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc71393-59cc-4886-84d9-da6c5087785e-config-volume" (OuterVolumeSpecName: "config-volume") pod "6cc71393-59cc-4886-84d9-da6c5087785e" (UID: "6cc71393-59cc-4886-84d9-da6c5087785e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.922052 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc71393-59cc-4886-84d9-da6c5087785e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6cc71393-59cc-4886-84d9-da6c5087785e" (UID: "6cc71393-59cc-4886-84d9-da6c5087785e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:00:04 crc kubenswrapper[4903]: I0128 18:00:04.925010 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc71393-59cc-4886-84d9-da6c5087785e-kube-api-access-thk25" (OuterVolumeSpecName: "kube-api-access-thk25") pod "6cc71393-59cc-4886-84d9-da6c5087785e" (UID: "6cc71393-59cc-4886-84d9-da6c5087785e"). InnerVolumeSpecName "kube-api-access-thk25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.018777 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thk25\" (UniqueName: \"kubernetes.io/projected/6cc71393-59cc-4886-84d9-da6c5087785e-kube-api-access-thk25\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.018816 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cc71393-59cc-4886-84d9-da6c5087785e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.018830 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cc71393-59cc-4886-84d9-da6c5087785e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.380606 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" event={"ID":"6cc71393-59cc-4886-84d9-da6c5087785e","Type":"ContainerDied","Data":"24d8dd646837f30c9d1292461d4addb3f3c1285d114e9c7dffc2d06d017f6148"} Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.381062 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d8dd646837f30c9d1292461d4addb3f3c1285d114e9c7dffc2d06d017f6148" Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.381139 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493720-knwn4" Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.436824 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq"] Jan 28 18:00:05 crc kubenswrapper[4903]: W0128 18:00:05.449075 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice/crio-fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60 WatchSource:0}: Error finding container fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60: Status 404 returned error can't find the container with id fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60 Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.450289 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493675-vd2rq"] Jan 28 18:00:05 crc kubenswrapper[4903]: I0128 18:00:05.466840 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jscvm"] Jan 28 18:00:06 crc kubenswrapper[4903]: I0128 18:00:06.393477 4903 generic.go:334] "Generic (PLEG): container finished" podID="bd21e445-600a-43a4-be88-530a9546cf4f" containerID="841a8ff6b347dd8d5f9987bdcfc5fcacd2d868505c28da0bafdbfd712b987615" exitCode=0 Jan 28 18:00:06 crc kubenswrapper[4903]: I0128 18:00:06.393553 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jscvm" event={"ID":"bd21e445-600a-43a4-be88-530a9546cf4f","Type":"ContainerDied","Data":"841a8ff6b347dd8d5f9987bdcfc5fcacd2d868505c28da0bafdbfd712b987615"} Jan 28 18:00:06 crc kubenswrapper[4903]: I0128 18:00:06.393786 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jscvm" 
event={"ID":"bd21e445-600a-43a4-be88-530a9546cf4f","Type":"ContainerStarted","Data":"fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60"} Jan 28 18:00:06 crc kubenswrapper[4903]: I0128 18:00:06.435627 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc39a5ce-e947-4b9d-9d49-dc984dcdb46b" path="/var/lib/kubelet/pods/fc39a5ce-e947-4b9d-9d49-dc984dcdb46b/volumes" Jan 28 18:00:08 crc kubenswrapper[4903]: I0128 18:00:08.428919 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jscvm" event={"ID":"bd21e445-600a-43a4-be88-530a9546cf4f","Type":"ContainerStarted","Data":"925f0a806989fa8c79e0b67bc01937038e48da4d72740c5f68336a3bfc8023e8"} Jan 28 18:00:12 crc kubenswrapper[4903]: I0128 18:00:12.457930 4903 generic.go:334] "Generic (PLEG): container finished" podID="bd21e445-600a-43a4-be88-530a9546cf4f" containerID="925f0a806989fa8c79e0b67bc01937038e48da4d72740c5f68336a3bfc8023e8" exitCode=0 Jan 28 18:00:12 crc kubenswrapper[4903]: I0128 18:00:12.458022 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jscvm" event={"ID":"bd21e445-600a-43a4-be88-530a9546cf4f","Type":"ContainerDied","Data":"925f0a806989fa8c79e0b67bc01937038e48da4d72740c5f68336a3bfc8023e8"} Jan 28 18:00:13 crc kubenswrapper[4903]: I0128 18:00:13.415210 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 18:00:13 crc kubenswrapper[4903]: E0128 18:00:13.415647 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:00:14 crc kubenswrapper[4903]: I0128 18:00:14.479089 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jscvm" event={"ID":"bd21e445-600a-43a4-be88-530a9546cf4f","Type":"ContainerStarted","Data":"3d6ada1105a8786869b51421f3e8919030a4cb085a597af100b14ca1e7ffa574"} Jan 28 18:00:14 crc kubenswrapper[4903]: I0128 18:00:14.504486 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jscvm" podStartSLOduration=3.321792259 podStartE2EDuration="10.50446204s" podCreationTimestamp="2026-01-28 18:00:04 +0000 UTC" firstStartedPulling="2026-01-28 18:00:06.39580413 +0000 UTC m=+8078.671775641" lastFinishedPulling="2026-01-28 18:00:13.578473881 +0000 UTC m=+8085.854445422" observedRunningTime="2026-01-28 18:00:14.496604485 +0000 UTC m=+8086.772576016" watchObservedRunningTime="2026-01-28 18:00:14.50446204 +0000 UTC m=+8086.780433561" Jan 28 18:00:14 crc kubenswrapper[4903]: I0128 18:00:14.888928 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:14 crc kubenswrapper[4903]: I0128 18:00:14.889035 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:15 crc kubenswrapper[4903]: I0128 18:00:15.948639 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jscvm" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" 
containerName="registry-server" probeResult="failure" output=< Jan 28 18:00:15 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 18:00:15 crc kubenswrapper[4903]: > Jan 28 18:00:24 crc kubenswrapper[4903]: I0128 18:00:24.943884 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:25 crc kubenswrapper[4903]: I0128 18:00:25.008008 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:25 crc kubenswrapper[4903]: I0128 18:00:25.180225 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jscvm"] Jan 28 18:00:26 crc kubenswrapper[4903]: I0128 18:00:26.609311 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jscvm" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="registry-server" containerID="cri-o://3d6ada1105a8786869b51421f3e8919030a4cb085a597af100b14ca1e7ffa574" gracePeriod=2 Jan 28 18:00:27 crc kubenswrapper[4903]: I0128 18:00:27.621003 4903 generic.go:334] "Generic (PLEG): container finished" podID="bd21e445-600a-43a4-be88-530a9546cf4f" containerID="3d6ada1105a8786869b51421f3e8919030a4cb085a597af100b14ca1e7ffa574" exitCode=0 Jan 28 18:00:27 crc kubenswrapper[4903]: I0128 18:00:27.621057 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jscvm" event={"ID":"bd21e445-600a-43a4-be88-530a9546cf4f","Type":"ContainerDied","Data":"3d6ada1105a8786869b51421f3e8919030a4cb085a597af100b14ca1e7ffa574"} Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.151522 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.164225 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9h2m\" (UniqueName: \"kubernetes.io/projected/bd21e445-600a-43a4-be88-530a9546cf4f-kube-api-access-f9h2m\") pod \"bd21e445-600a-43a4-be88-530a9546cf4f\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.164788 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-catalog-content\") pod \"bd21e445-600a-43a4-be88-530a9546cf4f\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.165055 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-utilities\") pod \"bd21e445-600a-43a4-be88-530a9546cf4f\" (UID: \"bd21e445-600a-43a4-be88-530a9546cf4f\") " Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.165904 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-utilities" (OuterVolumeSpecName: "utilities") pod "bd21e445-600a-43a4-be88-530a9546cf4f" (UID: "bd21e445-600a-43a4-be88-530a9546cf4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.169662 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.175945 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd21e445-600a-43a4-be88-530a9546cf4f-kube-api-access-f9h2m" (OuterVolumeSpecName: "kube-api-access-f9h2m") pod "bd21e445-600a-43a4-be88-530a9546cf4f" (UID: "bd21e445-600a-43a4-be88-530a9546cf4f"). InnerVolumeSpecName "kube-api-access-f9h2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.227826 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd21e445-600a-43a4-be88-530a9546cf4f" (UID: "bd21e445-600a-43a4-be88-530a9546cf4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.272449 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9h2m\" (UniqueName: \"kubernetes.io/projected/bd21e445-600a-43a4-be88-530a9546cf4f-kube-api-access-f9h2m\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.272883 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd21e445-600a-43a4-be88-530a9546cf4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.424138 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 18:00:28 crc kubenswrapper[4903]: E0128 18:00:28.424693 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.637719 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jscvm" event={"ID":"bd21e445-600a-43a4-be88-530a9546cf4f","Type":"ContainerDied","Data":"fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60"} Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.637792 4903 scope.go:117] "RemoveContainer" containerID="3d6ada1105a8786869b51421f3e8919030a4cb085a597af100b14ca1e7ffa574" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.637990 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jscvm" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.669453 4903 scope.go:117] "RemoveContainer" containerID="925f0a806989fa8c79e0b67bc01937038e48da4d72740c5f68336a3bfc8023e8" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.680129 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jscvm"] Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.694192 4903 scope.go:117] "RemoveContainer" containerID="841a8ff6b347dd8d5f9987bdcfc5fcacd2d868505c28da0bafdbfd712b987615" Jan 28 18:00:28 crc kubenswrapper[4903]: I0128 18:00:28.696818 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jscvm"] Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.431670 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" path="/var/lib/kubelet/pods/bd21e445-600a-43a4-be88-530a9546cf4f/volumes" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.597242 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9bbq"] Jan 28 18:00:30 crc kubenswrapper[4903]: E0128 18:00:30.597798 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="extract-content" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.597824 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="extract-content" Jan 28 18:00:30 crc kubenswrapper[4903]: E0128 18:00:30.597838 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="registry-server" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.597845 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="registry-server" Jan 28 18:00:30 crc kubenswrapper[4903]: E0128 18:00:30.597855 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="extract-utilities" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.597862 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="extract-utilities" Jan 28 18:00:30 crc kubenswrapper[4903]: E0128 18:00:30.597885 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc71393-59cc-4886-84d9-da6c5087785e" containerName="collect-profiles" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.597890 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc71393-59cc-4886-84d9-da6c5087785e" containerName="collect-profiles" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.598105 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd21e445-600a-43a4-be88-530a9546cf4f" containerName="registry-server" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.598127 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc71393-59cc-4886-84d9-da6c5087785e" containerName="collect-profiles" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.599774 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.613860 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9bbq"] Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.724007 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-utilities\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.724126 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-catalog-content\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.724170 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7trj\" (UniqueName: \"kubernetes.io/projected/d64808a7-d711-4490-b728-784aeb88ce5e-kube-api-access-l7trj\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.826494 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-utilities\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.826661 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-catalog-content\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.826715 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7trj\" (UniqueName: \"kubernetes.io/projected/d64808a7-d711-4490-b728-784aeb88ce5e-kube-api-access-l7trj\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.827130 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-utilities\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.827345 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-catalog-content\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.850769 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l7trj\" (UniqueName: \"kubernetes.io/projected/d64808a7-d711-4490-b728-784aeb88ce5e-kube-api-access-l7trj\") pod \"certified-operators-d9bbq\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:30 crc kubenswrapper[4903]: I0128 18:00:30.932003 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:31 crc kubenswrapper[4903]: I0128 18:00:31.486240 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9bbq"] Jan 28 18:00:31 crc kubenswrapper[4903]: W0128 18:00:31.488180 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd64808a7_d711_4490_b728_784aeb88ce5e.slice/crio-6f1df2544644954456516d1866e1226aaa4b170f6ec0981f51fb543c655c8b56 WatchSource:0}: Error finding container 6f1df2544644954456516d1866e1226aaa4b170f6ec0981f51fb543c655c8b56: Status 404 returned error can't find the container with id 6f1df2544644954456516d1866e1226aaa4b170f6ec0981f51fb543c655c8b56 Jan 28 18:00:31 crc kubenswrapper[4903]: E0128 18:00:31.639285 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice/crio-fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60\": RecentStats: unable to find data in memory cache]" Jan 28 18:00:31 crc kubenswrapper[4903]: I0128 18:00:31.668849 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9bbq" event={"ID":"d64808a7-d711-4490-b728-784aeb88ce5e","Type":"ContainerStarted","Data":"6f1df2544644954456516d1866e1226aaa4b170f6ec0981f51fb543c655c8b56"} Jan 28 18:00:32 crc kubenswrapper[4903]: I0128 18:00:32.683224 4903 generic.go:334] "Generic (PLEG): container finished" podID="d64808a7-d711-4490-b728-784aeb88ce5e" containerID="1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311" exitCode=0 Jan 28 18:00:32 crc kubenswrapper[4903]: I0128 18:00:32.683313 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9bbq" event={"ID":"d64808a7-d711-4490-b728-784aeb88ce5e","Type":"ContainerDied","Data":"1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311"} Jan 28 18:00:36 crc kubenswrapper[4903]: I0128 18:00:36.732713 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9bbq" event={"ID":"d64808a7-d711-4490-b728-784aeb88ce5e","Type":"ContainerStarted","Data":"dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d"} Jan 28 18:00:39 crc kubenswrapper[4903]: I0128 18:00:39.766468 4903 generic.go:334] "Generic (PLEG): container finished" podID="d64808a7-d711-4490-b728-784aeb88ce5e" containerID="dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d" exitCode=0 Jan 28 18:00:39 crc kubenswrapper[4903]: I0128 18:00:39.766617 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9bbq" 
event={"ID":"d64808a7-d711-4490-b728-784aeb88ce5e","Type":"ContainerDied","Data":"dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d"} Jan 28 18:00:41 crc kubenswrapper[4903]: I0128 18:00:41.413874 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 18:00:41 crc kubenswrapper[4903]: E0128 18:00:41.414584 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:00:41 crc kubenswrapper[4903]: E0128 18:00:41.901390 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice/crio-fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60\": RecentStats: unable to find data in memory cache]" Jan 28 18:00:42 crc kubenswrapper[4903]: I0128 18:00:42.801105 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9bbq" event={"ID":"d64808a7-d711-4490-b728-784aeb88ce5e","Type":"ContainerStarted","Data":"9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753"} Jan 28 18:00:42 crc kubenswrapper[4903]: I0128 18:00:42.821973 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9bbq" podStartSLOduration=3.998342244 podStartE2EDuration="12.821956049s" podCreationTimestamp="2026-01-28 18:00:30 +0000 UTC" firstStartedPulling="2026-01-28 18:00:32.686320335 +0000 UTC m=+8104.962291876" lastFinishedPulling="2026-01-28 18:00:41.50993417 +0000 UTC m=+8113.785905681" observedRunningTime="2026-01-28 18:00:42.817877648 +0000 UTC m=+8115.093849179" watchObservedRunningTime="2026-01-28 18:00:42.821956049 +0000 UTC m=+8115.097927570" Jan 28 18:00:50 crc kubenswrapper[4903]: I0128 18:00:50.933424 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:50 crc kubenswrapper[4903]: I0128 18:00:50.933986 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:50 crc kubenswrapper[4903]: I0128 18:00:50.979166 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:51 crc kubenswrapper[4903]: I0128 18:00:51.936828 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:52 crc kubenswrapper[4903]: I0128 18:00:52.002621 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9bbq"] Jan 28 18:00:52 crc kubenswrapper[4903]: E0128 18:00:52.181272 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice/crio-fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60\": RecentStats: unable to find data in memory cache]" Jan 28 18:00:53 crc kubenswrapper[4903]: I0128 18:00:53.911689 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d9bbq" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="registry-server" containerID="cri-o://9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753" gracePeriod=2 Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.468600 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.530045 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-catalog-content\") pod \"d64808a7-d711-4490-b728-784aeb88ce5e\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.530181 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7trj\" (UniqueName: \"kubernetes.io/projected/d64808a7-d711-4490-b728-784aeb88ce5e-kube-api-access-l7trj\") pod \"d64808a7-d711-4490-b728-784aeb88ce5e\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.530312 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-utilities\") pod \"d64808a7-d711-4490-b728-784aeb88ce5e\" (UID: \"d64808a7-d711-4490-b728-784aeb88ce5e\") " Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.532287 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-utilities" (OuterVolumeSpecName: "utilities") pod "d64808a7-d711-4490-b728-784aeb88ce5e" (UID: "d64808a7-d711-4490-b728-784aeb88ce5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.539175 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d64808a7-d711-4490-b728-784aeb88ce5e-kube-api-access-l7trj" (OuterVolumeSpecName: "kube-api-access-l7trj") pod "d64808a7-d711-4490-b728-784aeb88ce5e" (UID: "d64808a7-d711-4490-b728-784aeb88ce5e"). InnerVolumeSpecName "kube-api-access-l7trj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.582407 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d64808a7-d711-4490-b728-784aeb88ce5e" (UID: "d64808a7-d711-4490-b728-784aeb88ce5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.632186 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.632219 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d64808a7-d711-4490-b728-784aeb88ce5e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.632230 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7trj\" (UniqueName: \"kubernetes.io/projected/d64808a7-d711-4490-b728-784aeb88ce5e-kube-api-access-l7trj\") on node \"crc\" DevicePath \"\"" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.925372 4903 generic.go:334] "Generic (PLEG): container finished" podID="d64808a7-d711-4490-b728-784aeb88ce5e" containerID="9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753" exitCode=0 Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.925410 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9bbq" event={"ID":"d64808a7-d711-4490-b728-784aeb88ce5e","Type":"ContainerDied","Data":"9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753"} Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.925501 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9bbq" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.925517 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9bbq" event={"ID":"d64808a7-d711-4490-b728-784aeb88ce5e","Type":"ContainerDied","Data":"6f1df2544644954456516d1866e1226aaa4b170f6ec0981f51fb543c655c8b56"} Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.925542 4903 scope.go:117] "RemoveContainer" containerID="9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.960080 4903 scope.go:117] "RemoveContainer" containerID="dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d" Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.986200 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9bbq"] Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.997891 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9bbq"] Jan 28 18:00:54 crc kubenswrapper[4903]: I0128 18:00:54.999845 4903 scope.go:117] "RemoveContainer" containerID="1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311" Jan 28 18:00:55 crc kubenswrapper[4903]: I0128 18:00:55.061334 4903 scope.go:117] "RemoveContainer" containerID="9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753" Jan 28 18:00:55 crc kubenswrapper[4903]: E0128 18:00:55.062209 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753\": container with ID starting with 9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753 not found: ID does not exist" containerID="9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753" Jan 28 18:00:55 crc kubenswrapper[4903]: I0128 18:00:55.062265 
4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753"} err="failed to get container status \"9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753\": rpc error: code = NotFound desc = could not find container \"9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753\": container with ID starting with 9927b459ac9d8ba16eeddc20782ccd74d836271fdadccd6686bf8c60e9e2f753 not found: ID does not exist" Jan 28 18:00:55 crc kubenswrapper[4903]: I0128 18:00:55.062291 4903 scope.go:117] "RemoveContainer" containerID="dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d" Jan 28 18:00:55 crc kubenswrapper[4903]: E0128 18:00:55.062731 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d\": container with ID starting with dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d not found: ID does not exist" containerID="dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d" Jan 28 18:00:55 crc kubenswrapper[4903]: I0128 18:00:55.062783 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d"} err="failed to get container status \"dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d\": rpc error: code = NotFound desc = could not find container \"dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d\": container with ID starting with dc88dcdbc0745235df3cf7c79964a850c6cc5579b593fcf681ade6d6b6d8569d not found: ID does not exist" Jan 28 18:00:55 crc kubenswrapper[4903]: I0128 18:00:55.062806 4903 scope.go:117] "RemoveContainer" containerID="1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311" Jan 28 18:00:55 crc kubenswrapper[4903]: E0128 18:00:55.063018 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311\": container with ID starting with 1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311 not found: ID does not exist" containerID="1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311" Jan 28 18:00:55 crc kubenswrapper[4903]: I0128 18:00:55.063044 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311"} err="failed to get container status \"1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311\": rpc error: code = NotFound desc = could not find container \"1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311\": container with ID starting with 1d32b0d3fd6d9b1ad100d42481f9d6bb6f34e8dde4b5b211363326764d64a311 not found: ID does not exist" Jan 28 18:00:56 crc kubenswrapper[4903]: I0128 18:00:56.414253 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 18:00:56 crc kubenswrapper[4903]: E0128 18:00:56.415059 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:00:56 crc kubenswrapper[4903]: I0128 18:00:56.437804 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" path="/var/lib/kubelet/pods/d64808a7-d711-4490-b728-784aeb88ce5e/volumes" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.163702 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29493721-qm6m7"] Jan 28 18:01:00 crc kubenswrapper[4903]: E0128 18:01:00.168353 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="extract-utilities" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.168657 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="extract-utilities" Jan 28 18:01:00 crc kubenswrapper[4903]: E0128 18:01:00.168672 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="registry-server" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.168680 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="registry-server" Jan 28 18:01:00 crc kubenswrapper[4903]: E0128 18:01:00.168705 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="extract-content" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.168711 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="extract-content" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.168939 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d64808a7-d711-4490-b728-784aeb88ce5e" containerName="registry-server" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.169863 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.175891 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493721-qm6m7"] Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.360435 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-config-data\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.360801 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-combined-ca-bundle\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.361124 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6ld\" (UniqueName: \"kubernetes.io/projected/fdd15b30-7292-4150-8814-62f2e3811fbf-kube-api-access-2l6ld\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.361315 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-fernet-keys\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.463908 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-combined-ca-bundle\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.464270 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6ld\" (UniqueName: \"kubernetes.io/projected/fdd15b30-7292-4150-8814-62f2e3811fbf-kube-api-access-2l6ld\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.464337 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-fernet-keys\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.464579 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-config-data\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.474278 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-combined-ca-bundle\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.477304 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-fernet-keys\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.486074 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-config-data\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.486701 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6ld\" (UniqueName: \"kubernetes.io/projected/fdd15b30-7292-4150-8814-62f2e3811fbf-kube-api-access-2l6ld\") pod \"keystone-cron-29493721-qm6m7\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.498064 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:00 crc kubenswrapper[4903]: I0128 18:01:00.746301 4903 scope.go:117] "RemoveContainer" containerID="eadd54a8ec7affa286c963983a051d7e9780c6557f0a4b11cec81cb658c8e97a" Jan 28 18:01:01 crc kubenswrapper[4903]: I0128 18:01:01.029961 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493721-qm6m7"] Jan 28 18:01:02 crc kubenswrapper[4903]: I0128 18:01:02.012788 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493721-qm6m7" event={"ID":"fdd15b30-7292-4150-8814-62f2e3811fbf","Type":"ContainerStarted","Data":"e41370de08ff6c2d8d60b41a62265244257d207d803f1bd4ee9393029020a1af"} Jan 28 18:01:02 crc kubenswrapper[4903]: I0128 18:01:02.013126 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493721-qm6m7" event={"ID":"fdd15b30-7292-4150-8814-62f2e3811fbf","Type":"ContainerStarted","Data":"74d2cbed78df2e47814836b005e59e8d1edbae707b2b16c2c43a0db78d6fa442"} Jan 28 18:01:02 crc kubenswrapper[4903]: I0128 18:01:02.041040 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29493721-qm6m7" podStartSLOduration=2.041019136 podStartE2EDuration="2.041019136s" podCreationTimestamp="2026-01-28 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:01:02.03198068 +0000 UTC m=+8134.307952191" watchObservedRunningTime="2026-01-28 18:01:02.041019136 +0000 UTC m=+8134.316990647" Jan 28 18:01:02 crc kubenswrapper[4903]: E0128 18:01:02.432185 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice/crio-fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60\": RecentStats: unable to find data in memory cache]" Jan 28 18:01:06 crc kubenswrapper[4903]: I0128 18:01:06.052961 4903 generic.go:334] "Generic (PLEG): container finished" podID="fdd15b30-7292-4150-8814-62f2e3811fbf" containerID="e41370de08ff6c2d8d60b41a62265244257d207d803f1bd4ee9393029020a1af" exitCode=0 Jan 28 18:01:06 crc kubenswrapper[4903]: I0128 18:01:06.053036 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493721-qm6m7" event={"ID":"fdd15b30-7292-4150-8814-62f2e3811fbf","Type":"ContainerDied","Data":"e41370de08ff6c2d8d60b41a62265244257d207d803f1bd4ee9393029020a1af"} Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.413927 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.454612 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.473234 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-fernet-keys\") pod \"fdd15b30-7292-4150-8814-62f2e3811fbf\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.473406 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-config-data\") pod \"fdd15b30-7292-4150-8814-62f2e3811fbf\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.473713 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-combined-ca-bundle\") pod \"fdd15b30-7292-4150-8814-62f2e3811fbf\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.475575 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l6ld\" (UniqueName: \"kubernetes.io/projected/fdd15b30-7292-4150-8814-62f2e3811fbf-kube-api-access-2l6ld\") pod \"fdd15b30-7292-4150-8814-62f2e3811fbf\" (UID: \"fdd15b30-7292-4150-8814-62f2e3811fbf\") " Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.483378 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd15b30-7292-4150-8814-62f2e3811fbf-kube-api-access-2l6ld" (OuterVolumeSpecName: "kube-api-access-2l6ld") pod "fdd15b30-7292-4150-8814-62f2e3811fbf" (UID: "fdd15b30-7292-4150-8814-62f2e3811fbf"). InnerVolumeSpecName "kube-api-access-2l6ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.484994 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fdd15b30-7292-4150-8814-62f2e3811fbf" (UID: "fdd15b30-7292-4150-8814-62f2e3811fbf"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.511670 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdd15b30-7292-4150-8814-62f2e3811fbf" (UID: "fdd15b30-7292-4150-8814-62f2e3811fbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.558013 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-config-data" (OuterVolumeSpecName: "config-data") pod "fdd15b30-7292-4150-8814-62f2e3811fbf" (UID: "fdd15b30-7292-4150-8814-62f2e3811fbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.578950 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.578987 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.579000 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l6ld\" (UniqueName: \"kubernetes.io/projected/fdd15b30-7292-4150-8814-62f2e3811fbf-kube-api-access-2l6ld\") on node \"crc\" DevicePath \"\"" Jan 28 18:01:07 crc kubenswrapper[4903]: I0128 18:01:07.579010 4903 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fdd15b30-7292-4150-8814-62f2e3811fbf-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:01:08 crc kubenswrapper[4903]: I0128 18:01:08.084228 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"fbed65eaeb581b02889042335fcad240f34d8f4ae585dde0f6f79715993eda35"} Jan 28 18:01:08 crc kubenswrapper[4903]: I0128 18:01:08.087685 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493721-qm6m7" event={"ID":"fdd15b30-7292-4150-8814-62f2e3811fbf","Type":"ContainerDied","Data":"74d2cbed78df2e47814836b005e59e8d1edbae707b2b16c2c43a0db78d6fa442"} Jan 28 18:01:08 crc kubenswrapper[4903]: I0128 18:01:08.087760 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d2cbed78df2e47814836b005e59e8d1edbae707b2b16c2c43a0db78d6fa442" Jan 28 18:01:08 crc kubenswrapper[4903]: I0128 18:01:08.087870 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493721-qm6m7" Jan 28 18:01:12 crc kubenswrapper[4903]: E0128 18:01:12.713963 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice/crio-fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:01:22 crc kubenswrapper[4903]: E0128 18:01:22.972093 4903 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd21e445_600a_43a4_be88_530a9546cf4f.slice/crio-fa36bb3374cae645ea2cd9096acefa1f9dcefcd22a427e2afb7165be20e6ff60\": RecentStats: unable to find data in memory cache]" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.401992 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-58zms"] Jan 28 18:01:24 crc kubenswrapper[4903]: E0128 18:01:24.402655 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd15b30-7292-4150-8814-62f2e3811fbf" containerName="keystone-cron" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.402671 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd15b30-7292-4150-8814-62f2e3811fbf" containerName="keystone-cron" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.402907 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd15b30-7292-4150-8814-62f2e3811fbf" containerName="keystone-cron" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.404325 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.431581 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58zms"] Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.504277 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-catalog-content\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.504833 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47p7\" (UniqueName: \"kubernetes.io/projected/83424c2c-9151-4294-a989-9596bb6f09f5-kube-api-access-z47p7\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.504908 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-utilities\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.606838 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-catalog-content\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.607018 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47p7\" (UniqueName: \"kubernetes.io/projected/83424c2c-9151-4294-a989-9596bb6f09f5-kube-api-access-z47p7\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.607081 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-utilities\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.607653 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-catalog-content\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.607677 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-utilities\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.637353 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z47p7\" (UniqueName: \"kubernetes.io/projected/83424c2c-9151-4294-a989-9596bb6f09f5-kube-api-access-z47p7\") pod \"redhat-operators-58zms\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:24 crc kubenswrapper[4903]: I0128 18:01:24.728840 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:01:25 crc kubenswrapper[4903]: I0128 18:01:25.265718 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58zms"] Jan 28 18:01:25 crc kubenswrapper[4903]: I0128 18:01:25.417341 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58zms" event={"ID":"83424c2c-9151-4294-a989-9596bb6f09f5","Type":"ContainerStarted","Data":"97ccfd0b989eb677fa769407f01b45c1098c20bf94bd8734f4bee433d23336e5"} Jan 28 18:01:26 crc kubenswrapper[4903]: I0128 18:01:26.431745 4903 generic.go:334] "Generic (PLEG): container finished" podID="83424c2c-9151-4294-a989-9596bb6f09f5" containerID="2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc" exitCode=0 Jan 28 18:01:26 crc kubenswrapper[4903]: I0128 18:01:26.431817 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58zms" event={"ID":"83424c2c-9151-4294-a989-9596bb6f09f5","Type":"ContainerDied","Data":"2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc"} Jan 28 18:01:30 crc kubenswrapper[4903]: I0128 18:01:30.480697 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58zms" event={"ID":"83424c2c-9151-4294-a989-9596bb6f09f5","Type":"ContainerStarted","Data":"6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e"} Jan 28 18:01:50 crc kubenswrapper[4903]: I0128 18:01:50.694436 4903 generic.go:334] "Generic (PLEG): container finished" podID="83424c2c-9151-4294-a989-9596bb6f09f5" containerID="6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e" exitCode=0 Jan 28 18:01:50 crc kubenswrapper[4903]: I0128 18:01:50.694603 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58zms" event={"ID":"83424c2c-9151-4294-a989-9596bb6f09f5","Type":"ContainerDied","Data":"6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e"} Jan 28 18:01:57 crc kubenswrapper[4903]: I0128 18:01:57.790412 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58zms" event={"ID":"83424c2c-9151-4294-a989-9596bb6f09f5","Type":"ContainerStarted","Data":"bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202"} Jan 28 18:01:58 crc kubenswrapper[4903]: I0128 18:01:58.848391 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-58zms" podStartSLOduration=3.958219068 podStartE2EDuration="34.848367147s" podCreationTimestamp="2026-01-28 18:01:24 +0000 UTC" firstStartedPulling="2026-01-28 18:01:26.434694335 +0000 UTC m=+8158.710665846" lastFinishedPulling="2026-01-28 18:01:57.324842374 +0000 UTC m=+8189.600813925" observedRunningTime="2026-01-28 18:01:58.838257181 +0000 UTC m=+8191.114228692" watchObservedRunningTime="2026-01-28 18:01:58.848367147 +0000 UTC m=+8191.124338658" Jan 28 18:02:04 crc kubenswrapper[4903]: I0128 18:02:04.729293 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 
18:02:04 crc kubenswrapper[4903]: I0128 18:02:04.730014 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:02:05 crc kubenswrapper[4903]: I0128 18:02:05.787613 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-58zms" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="registry-server" probeResult="failure" output=< Jan 28 18:02:05 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 18:02:05 crc kubenswrapper[4903]: > Jan 28 18:02:15 crc kubenswrapper[4903]: I0128 18:02:15.779256 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-58zms" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="registry-server" probeResult="failure" output=< Jan 28 18:02:15 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 18:02:15 crc kubenswrapper[4903]: > Jan 28 18:02:24 crc kubenswrapper[4903]: I0128 18:02:24.793830 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:02:24 crc kubenswrapper[4903]: I0128 18:02:24.852582 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:02:25 crc kubenswrapper[4903]: I0128 18:02:25.628731 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58zms"] Jan 28 18:02:26 crc kubenswrapper[4903]: I0128 18:02:26.091651 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-58zms" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="registry-server" containerID="cri-o://bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202" gracePeriod=2 Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.601542 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.724370 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z47p7\" (UniqueName: \"kubernetes.io/projected/83424c2c-9151-4294-a989-9596bb6f09f5-kube-api-access-z47p7\") pod \"83424c2c-9151-4294-a989-9596bb6f09f5\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.724465 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-catalog-content\") pod \"83424c2c-9151-4294-a989-9596bb6f09f5\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.724606 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-utilities\") pod \"83424c2c-9151-4294-a989-9596bb6f09f5\" (UID: \"83424c2c-9151-4294-a989-9596bb6f09f5\") " Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.725496 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-utilities" (OuterVolumeSpecName: "utilities") pod "83424c2c-9151-4294-a989-9596bb6f09f5" (UID: "83424c2c-9151-4294-a989-9596bb6f09f5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.726221 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.733734 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83424c2c-9151-4294-a989-9596bb6f09f5-kube-api-access-z47p7" (OuterVolumeSpecName: "kube-api-access-z47p7") pod "83424c2c-9151-4294-a989-9596bb6f09f5" (UID: "83424c2c-9151-4294-a989-9596bb6f09f5"). InnerVolumeSpecName "kube-api-access-z47p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.828146 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z47p7\" (UniqueName: \"kubernetes.io/projected/83424c2c-9151-4294-a989-9596bb6f09f5-kube-api-access-z47p7\") on node \"crc\" DevicePath \"\"" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.851077 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83424c2c-9151-4294-a989-9596bb6f09f5" (UID: "83424c2c-9151-4294-a989-9596bb6f09f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:26.930811 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83424c2c-9151-4294-a989-9596bb6f09f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.105926 4903 generic.go:334] "Generic (PLEG): container finished" podID="83424c2c-9151-4294-a989-9596bb6f09f5" containerID="bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202" exitCode=0 Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.105990 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58zms" event={"ID":"83424c2c-9151-4294-a989-9596bb6f09f5","Type":"ContainerDied","Data":"bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202"} Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.106053 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58zms" event={"ID":"83424c2c-9151-4294-a989-9596bb6f09f5","Type":"ContainerDied","Data":"97ccfd0b989eb677fa769407f01b45c1098c20bf94bd8734f4bee433d23336e5"} Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.106078 4903 scope.go:117] "RemoveContainer" containerID="bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.106088 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58zms" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.160741 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58zms"] Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.163743 4903 scope.go:117] "RemoveContainer" containerID="6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.170636 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-58zms"] Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.196052 4903 scope.go:117] "RemoveContainer" containerID="2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.249653 4903 scope.go:117] "RemoveContainer" containerID="bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202" Jan 28 18:02:27 crc kubenswrapper[4903]: E0128 18:02:27.250151 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202\": container with ID starting with bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202 not found: ID does not exist" containerID="bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.250211 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202"} err="failed to get container status \"bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202\": rpc error: code = NotFound desc = could not find container \"bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202\": container with ID starting with bae28c13cdc8f3743824b41afb19fc7ef3f61cec25c6724e8ba404f8b9577202 not found: ID does not exist" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.250248 4903 scope.go:117] "RemoveContainer" containerID="6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e" Jan 28 18:02:27 crc kubenswrapper[4903]: E0128 18:02:27.250617 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e\": container with ID starting with 6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e not found: ID does not exist" containerID="6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.250657 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e"} err="failed to get container status \"6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e\": rpc error: code = NotFound desc = could not find container \"6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e\": container with ID starting with 6234b1c5fc3dcca1c824671ab8f4cd58d9df6e53efc906f79478d625272a567e not found: ID does not exist" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.250686 4903 scope.go:117] "RemoveContainer" containerID="2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc" Jan 28 18:02:27 crc kubenswrapper[4903]: E0128 18:02:27.251051 4903 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc\": container with ID starting with 2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc not found: ID does not exist" containerID="2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc" Jan 28 18:02:27 crc kubenswrapper[4903]: I0128 18:02:27.251087 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc"} err="failed to get container status \"2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc\": rpc error: code = NotFound desc = could not find container \"2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc\": container with ID starting with 2f9504d14295535422f53c46230076b8eb6f7500b48479dacbb92be1ecdeeefc not found: ID does not exist" Jan 28 18:02:28 crc kubenswrapper[4903]: I0128 18:02:28.428283 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" path="/var/lib/kubelet/pods/83424c2c-9151-4294-a989-9596bb6f09f5/volumes" Jan 28 18:03:21 crc kubenswrapper[4903]: I0128 18:03:21.735023 4903 generic.go:334] "Generic (PLEG): container finished" podID="f6bc3d5c-5760-45be-ba51-3337d607a4cd" containerID="e054a606ccf23310a657e230caafaec2dfe0a4842e0d9670a3b3228afa928c8e" exitCode=0 Jan 28 18:03:21 crc kubenswrapper[4903]: I0128 18:03:21.735117 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" event={"ID":"f6bc3d5c-5760-45be-ba51-3337d607a4cd","Type":"ContainerDied","Data":"e054a606ccf23310a657e230caafaec2dfe0a4842e0d9670a3b3228afa928c8e"} Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.192866 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.203708 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-inventory\") pod \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.208247 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf4n4\" (UniqueName: \"kubernetes.io/projected/f6bc3d5c-5760-45be-ba51-3337d607a4cd-kube-api-access-qf4n4\") pod \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.208630 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-ssh-key-openstack-cell1\") pod \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.209008 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-secret-0\") pod \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.209107 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-combined-ca-bundle\") pod \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\" (UID: \"f6bc3d5c-5760-45be-ba51-3337d607a4cd\") " Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.213386 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f6bc3d5c-5760-45be-ba51-3337d607a4cd" (UID: "f6bc3d5c-5760-45be-ba51-3337d607a4cd"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.214949 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bc3d5c-5760-45be-ba51-3337d607a4cd-kube-api-access-qf4n4" (OuterVolumeSpecName: "kube-api-access-qf4n4") pod "f6bc3d5c-5760-45be-ba51-3337d607a4cd" (UID: "f6bc3d5c-5760-45be-ba51-3337d607a4cd"). InnerVolumeSpecName "kube-api-access-qf4n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.240419 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "f6bc3d5c-5760-45be-ba51-3337d607a4cd" (UID: "f6bc3d5c-5760-45be-ba51-3337d607a4cd"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.260148 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-inventory" (OuterVolumeSpecName: "inventory") pod "f6bc3d5c-5760-45be-ba51-3337d607a4cd" (UID: "f6bc3d5c-5760-45be-ba51-3337d607a4cd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.270372 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f6bc3d5c-5760-45be-ba51-3337d607a4cd" (UID: "f6bc3d5c-5760-45be-ba51-3337d607a4cd"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.311996 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf4n4\" (UniqueName: \"kubernetes.io/projected/f6bc3d5c-5760-45be-ba51-3337d607a4cd-kube-api-access-qf4n4\") on node \"crc\" DevicePath \"\"" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.312583 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.312640 4903 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.312726 4903 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.312777 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6bc3d5c-5760-45be-ba51-3337d607a4cd-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.755977 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" event={"ID":"f6bc3d5c-5760-45be-ba51-3337d607a4cd","Type":"ContainerDied","Data":"5a0270e957f21190708dbbabe6dc0a09dccdbdb92294a72130edf0a34fc93e61"} Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.756025 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0270e957f21190708dbbabe6dc0a09dccdbdb92294a72130edf0a34fc93e61" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.756092 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-4vscg" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.863198 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-7r66n"] Jan 28 18:03:23 crc kubenswrapper[4903]: E0128 18:03:23.863625 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="extract-utilities" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.863643 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="extract-utilities" Jan 28 18:03:23 crc kubenswrapper[4903]: E0128 18:03:23.863669 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6bc3d5c-5760-45be-ba51-3337d607a4cd" containerName="libvirt-openstack-openstack-cell1" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.863677 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bc3d5c-5760-45be-ba51-3337d607a4cd" containerName="libvirt-openstack-openstack-cell1" Jan 28 18:03:23 crc kubenswrapper[4903]: E0128 18:03:23.863690 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="registry-server" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.863698 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="registry-server" Jan 28 18:03:23 crc kubenswrapper[4903]: E0128 18:03:23.863712 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="extract-content" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.863718 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="extract-content" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.863907 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6bc3d5c-5760-45be-ba51-3337d607a4cd" containerName="libvirt-openstack-openstack-cell1" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.863937 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="83424c2c-9151-4294-a989-9596bb6f09f5" containerName="registry-server" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.864676 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.866605 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.866907 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.867247 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.867268 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.867712 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.867847 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.869039 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.877525 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-7r66n"] Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.928324 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.928367 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.928409 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.928514 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-inventory\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.929077 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-0\") 
pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.929126 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.929231 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.929269 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:23 crc kubenswrapper[4903]: I0128 18:03:23.929451 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmnlp\" (UniqueName: \"kubernetes.io/projected/e873be7c-55bc-4466-8ab5-e8e107ac32f5-kube-api-access-mmnlp\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.031601 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.031700 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-inventory\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.031767 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.031794 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-combined-ca-bundle\") 
pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.031842 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.031866 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.031952 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmnlp\" (UniqueName: \"kubernetes.io/projected/e873be7c-55bc-4466-8ab5-e8e107ac32f5-kube-api-access-mmnlp\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.032005 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.032035 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.033010 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.036151 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.036600 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: 
\"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.036693 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.036881 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.037625 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.048138 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-inventory\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.048609 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.057284 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmnlp\" (UniqueName: \"kubernetes.io/projected/e873be7c-55bc-4466-8ab5-e8e107ac32f5-kube-api-access-mmnlp\") pod \"nova-cell1-openstack-openstack-cell1-7r66n\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.194955 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.752069 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-7r66n"] Jan 28 18:03:24 crc kubenswrapper[4903]: I0128 18:03:24.775108 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" event={"ID":"e873be7c-55bc-4466-8ab5-e8e107ac32f5","Type":"ContainerStarted","Data":"4a1e0e3c3dfad29d9c890607147758832dfaea6ddbe97625e90620e24fa25b9f"} Jan 28 18:03:26 crc kubenswrapper[4903]: I0128 18:03:26.613503 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:03:26 crc kubenswrapper[4903]: I0128 18:03:26.613967 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:03:28 crc kubenswrapper[4903]: I0128 18:03:28.822755 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" event={"ID":"e873be7c-55bc-4466-8ab5-e8e107ac32f5","Type":"ContainerStarted","Data":"741caa9860ff51f8b315fa901fc44b595b2537faf23b0f3336b3c5aa849ce6b8"} Jan 28 18:03:29 crc kubenswrapper[4903]: I0128 18:03:29.866887 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" podStartSLOduration=3.478026831 podStartE2EDuration="6.866861428s" podCreationTimestamp="2026-01-28 18:03:23 +0000 UTC" firstStartedPulling="2026-01-28 18:03:24.76264989 +0000 UTC m=+8277.038621401" lastFinishedPulling="2026-01-28 18:03:28.151484487 +0000 UTC m=+8280.427455998" observedRunningTime="2026-01-28 18:03:29.855929339 +0000 UTC m=+8282.131900860" watchObservedRunningTime="2026-01-28 18:03:29.866861428 +0000 UTC m=+8282.142832949" Jan 28 18:03:56 crc kubenswrapper[4903]: I0128 18:03:56.613457 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:03:56 crc kubenswrapper[4903]: I0128 18:03:56.614035 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:04:26 crc kubenswrapper[4903]: I0128 18:04:26.613454 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:04:26 crc kubenswrapper[4903]: I0128 18:04:26.614099 4903 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:04:26 crc kubenswrapper[4903]: I0128 18:04:26.614152 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 18:04:26 crc kubenswrapper[4903]: I0128 18:04:26.615040 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbed65eaeb581b02889042335fcad240f34d8f4ae585dde0f6f79715993eda35"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:04:26 crc kubenswrapper[4903]: I0128 18:04:26.615107 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://fbed65eaeb581b02889042335fcad240f34d8f4ae585dde0f6f79715993eda35" gracePeriod=600 Jan 28 18:04:27 crc kubenswrapper[4903]: I0128 18:04:27.445411 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="fbed65eaeb581b02889042335fcad240f34d8f4ae585dde0f6f79715993eda35" exitCode=0 Jan 28 18:04:27 crc kubenswrapper[4903]: I0128 18:04:27.445454 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"fbed65eaeb581b02889042335fcad240f34d8f4ae585dde0f6f79715993eda35"} Jan 28 18:04:27 crc kubenswrapper[4903]: I0128 18:04:27.445826 4903 scope.go:117] "RemoveContainer" containerID="6ef09804e5d689be53e8b336c4e33cd947bf16cb708d2fb40c9ec341ab75eeed" Jan 28 18:04:28 crc kubenswrapper[4903]: I0128 18:04:28.459603 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836"} Jan 28 18:06:10 crc kubenswrapper[4903]: I0128 18:06:10.470019 4903 generic.go:334] "Generic (PLEG): container finished" podID="e873be7c-55bc-4466-8ab5-e8e107ac32f5" containerID="741caa9860ff51f8b315fa901fc44b595b2537faf23b0f3336b3c5aa849ce6b8" exitCode=0 Jan 28 18:06:10 crc kubenswrapper[4903]: I0128 18:06:10.470635 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" event={"ID":"e873be7c-55bc-4466-8ab5-e8e107ac32f5","Type":"ContainerDied","Data":"741caa9860ff51f8b315fa901fc44b595b2537faf23b0f3336b3c5aa849ce6b8"} Jan 28 18:06:11 crc kubenswrapper[4903]: I0128 18:06:11.951175 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124091 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-1\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124137 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmnlp\" (UniqueName: \"kubernetes.io/projected/e873be7c-55bc-4466-8ab5-e8e107ac32f5-kube-api-access-mmnlp\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124177 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-ssh-key-openstack-cell1\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124260 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-0\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124307 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-1\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124487 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-combined-ca-bundle\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124520 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cells-global-config-0\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.124576 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-inventory\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.125039 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-0\") pod \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\" (UID: \"e873be7c-55bc-4466-8ab5-e8e107ac32f5\") " Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.146365 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.147232 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e873be7c-55bc-4466-8ab5-e8e107ac32f5-kube-api-access-mmnlp" (OuterVolumeSpecName: "kube-api-access-mmnlp") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "kube-api-access-mmnlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.159370 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.161059 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.162102 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.163470 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.174776 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-inventory" (OuterVolumeSpecName: "inventory") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.175314 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.175763 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "e873be7c-55bc-4466-8ab5-e8e107ac32f5" (UID: "e873be7c-55bc-4466-8ab5-e8e107ac32f5"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227236 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227275 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmnlp\" (UniqueName: \"kubernetes.io/projected/e873be7c-55bc-4466-8ab5-e8e107ac32f5-kube-api-access-mmnlp\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227284 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227293 4903 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227304 4903 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227312 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227321 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227331 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.227340 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/e873be7c-55bc-4466-8ab5-e8e107ac32f5-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.489866 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" event={"ID":"e873be7c-55bc-4466-8ab5-e8e107ac32f5","Type":"ContainerDied","Data":"4a1e0e3c3dfad29d9c890607147758832dfaea6ddbe97625e90620e24fa25b9f"} Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.489916 4903 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4a1e0e3c3dfad29d9c890607147758832dfaea6ddbe97625e90620e24fa25b9f" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.489986 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-7r66n" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.613429 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-xmnlc"] Jan 28 18:06:12 crc kubenswrapper[4903]: E0128 18:06:12.614489 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e873be7c-55bc-4466-8ab5-e8e107ac32f5" containerName="nova-cell1-openstack-openstack-cell1" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.614515 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e873be7c-55bc-4466-8ab5-e8e107ac32f5" containerName="nova-cell1-openstack-openstack-cell1" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.614783 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e873be7c-55bc-4466-8ab5-e8e107ac32f5" containerName="nova-cell1-openstack-openstack-cell1" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.623101 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.625807 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.627643 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.627756 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.627825 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.628016 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.634560 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-xmnlc"] Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.653877 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.653931 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/f8a23799-e081-42b8-9c63-abc115dfdf94-kube-api-access-45dgc\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.654113 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.654194 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-inventory\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.654368 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.654575 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.654752 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.756425 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.756500 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-inventory\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.756590 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.756625 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.756660 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.756728 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.756765 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/f8a23799-e081-42b8-9c63-abc115dfdf94-kube-api-access-45dgc\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.760707 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.760762 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-inventory\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.761252 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.761726 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.768992 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-1\") pod 
\"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.770505 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.779445 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/f8a23799-e081-42b8-9c63-abc115dfdf94-kube-api-access-45dgc\") pod \"telemetry-openstack-openstack-cell1-xmnlc\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:12 crc kubenswrapper[4903]: I0128 18:06:12.958113 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:06:13 crc kubenswrapper[4903]: I0128 18:06:13.545093 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-xmnlc"] Jan 28 18:06:13 crc kubenswrapper[4903]: I0128 18:06:13.549044 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:06:14 crc kubenswrapper[4903]: I0128 18:06:14.511182 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" event={"ID":"f8a23799-e081-42b8-9c63-abc115dfdf94","Type":"ContainerStarted","Data":"d6fe45cd426ad7c470db1c4429bd338bfd233614c4ca9a1346bc8b1b38a57733"} Jan 28 18:06:15 crc kubenswrapper[4903]: I0128 18:06:15.532288 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" event={"ID":"f8a23799-e081-42b8-9c63-abc115dfdf94","Type":"ContainerStarted","Data":"4a479e636feabf00a73f9961e342be138de558fd1d5516f45e25130d4b7aef72"} Jan 28 18:06:15 crc kubenswrapper[4903]: I0128 18:06:15.561365 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" podStartSLOduration=2.306019823 podStartE2EDuration="3.561341697s" podCreationTimestamp="2026-01-28 18:06:12 +0000 UTC" firstStartedPulling="2026-01-28 18:06:13.548846758 +0000 UTC m=+8445.824818269" lastFinishedPulling="2026-01-28 18:06:14.804168632 +0000 UTC m=+8447.080140143" observedRunningTime="2026-01-28 18:06:15.5562942 +0000 UTC m=+8447.832265711" watchObservedRunningTime="2026-01-28 18:06:15.561341697 +0000 UTC m=+8447.837313208" Jan 28 18:06:56 crc kubenswrapper[4903]: I0128 18:06:56.614189 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:06:56 crc kubenswrapper[4903]: I0128 18:06:56.614996 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 28 18:07:26 crc kubenswrapper[4903]: I0128 18:07:26.613916 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:07:26 crc kubenswrapper[4903]: I0128 18:07:26.614425 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:07:56 crc kubenswrapper[4903]: I0128 18:07:56.614106 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:07:56 crc kubenswrapper[4903]: I0128 18:07:56.614767 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:07:56 crc kubenswrapper[4903]: I0128 18:07:56.614843 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 18:07:56 crc kubenswrapper[4903]: I0128 18:07:56.615991 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:07:56 crc kubenswrapper[4903]: I0128 18:07:56.616064 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" gracePeriod=600 Jan 28 18:07:56 crc kubenswrapper[4903]: E0128 18:07:56.751879 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:07:57 crc kubenswrapper[4903]: I0128 18:07:57.573003 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" exitCode=0 Jan 28 18:07:57 crc kubenswrapper[4903]: I0128 18:07:57.573332 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" 
event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836"} Jan 28 18:07:57 crc kubenswrapper[4903]: I0128 18:07:57.573373 4903 scope.go:117] "RemoveContainer" containerID="fbed65eaeb581b02889042335fcad240f34d8f4ae585dde0f6f79715993eda35" Jan 28 18:07:57 crc kubenswrapper[4903]: I0128 18:07:57.575291 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:07:57 crc kubenswrapper[4903]: E0128 18:07:57.576157 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:08:11 crc kubenswrapper[4903]: I0128 18:08:11.413359 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:08:11 crc kubenswrapper[4903]: E0128 18:08:11.414243 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:08:24 crc kubenswrapper[4903]: I0128 18:08:24.413494 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:08:24 crc kubenswrapper[4903]: E0128 18:08:24.414291 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:08:35 crc kubenswrapper[4903]: I0128 18:08:35.420149 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:08:35 crc kubenswrapper[4903]: E0128 18:08:35.421140 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:08:50 crc kubenswrapper[4903]: I0128 18:08:50.414470 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:08:50 crc kubenswrapper[4903]: E0128 18:08:50.415389 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.001942 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-27kv7"] Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.005859 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.017152 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27kv7"] Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.098074 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28zk\" (UniqueName: \"kubernetes.io/projected/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-kube-api-access-p28zk\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.098369 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-utilities\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.098557 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-catalog-content\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.200931 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28zk\" (UniqueName: \"kubernetes.io/projected/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-kube-api-access-p28zk\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.201363 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-utilities\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.201436 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-catalog-content\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.202036 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-utilities\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " 
pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.202053 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-catalog-content\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.223180 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p28zk\" (UniqueName: \"kubernetes.io/projected/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-kube-api-access-p28zk\") pod \"redhat-marketplace-27kv7\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.334703 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:08:57 crc kubenswrapper[4903]: I0128 18:08:57.886863 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27kv7"] Jan 28 18:08:58 crc kubenswrapper[4903]: I0128 18:08:58.188636 4903 generic.go:334] "Generic (PLEG): container finished" podID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerID="fc18ff558f3abbde0dd15a96b8dd2385f3804720da9c1858998b6ebcdd7ab236" exitCode=0 Jan 28 18:08:58 crc kubenswrapper[4903]: I0128 18:08:58.188731 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27kv7" event={"ID":"fd1d464c-e2c2-41b3-8bf2-426f5be2d626","Type":"ContainerDied","Data":"fc18ff558f3abbde0dd15a96b8dd2385f3804720da9c1858998b6ebcdd7ab236"} Jan 28 18:08:58 crc kubenswrapper[4903]: I0128 18:08:58.188932 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27kv7" event={"ID":"fd1d464c-e2c2-41b3-8bf2-426f5be2d626","Type":"ContainerStarted","Data":"1e31e6801ea0cd7da08a631f667d77c613c668b1828650174d650265abecdf66"} Jan 28 18:08:59 crc kubenswrapper[4903]: I0128 18:08:59.199501 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27kv7" event={"ID":"fd1d464c-e2c2-41b3-8bf2-426f5be2d626","Type":"ContainerStarted","Data":"b5b5fe953d3451d87f23a0a3422d7d65ed17516c100b77c4aa1e214190be1963"} Jan 28 18:09:00 crc kubenswrapper[4903]: I0128 18:09:00.212441 4903 generic.go:334] "Generic (PLEG): container finished" podID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerID="b5b5fe953d3451d87f23a0a3422d7d65ed17516c100b77c4aa1e214190be1963" exitCode=0 Jan 28 18:09:00 crc kubenswrapper[4903]: I0128 18:09:00.212590 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27kv7" event={"ID":"fd1d464c-e2c2-41b3-8bf2-426f5be2d626","Type":"ContainerDied","Data":"b5b5fe953d3451d87f23a0a3422d7d65ed17516c100b77c4aa1e214190be1963"} Jan 28 18:09:01 crc kubenswrapper[4903]: I0128 18:09:01.223237 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27kv7" event={"ID":"fd1d464c-e2c2-41b3-8bf2-426f5be2d626","Type":"ContainerStarted","Data":"18948defbddc1d1902bb07d815cccf260461c25041134c64799efdd05af410c0"} Jan 28 18:09:01 crc kubenswrapper[4903]: I0128 18:09:01.244371 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-27kv7" podStartSLOduration=2.7249267919999998 
podStartE2EDuration="5.244355459s" podCreationTimestamp="2026-01-28 18:08:56 +0000 UTC" firstStartedPulling="2026-01-28 18:08:58.190755557 +0000 UTC m=+8610.466727068" lastFinishedPulling="2026-01-28 18:09:00.710184214 +0000 UTC m=+8612.986155735" observedRunningTime="2026-01-28 18:09:01.239676961 +0000 UTC m=+8613.515648482" watchObservedRunningTime="2026-01-28 18:09:01.244355459 +0000 UTC m=+8613.520326970" Jan 28 18:09:02 crc kubenswrapper[4903]: I0128 18:09:02.413296 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:09:02 crc kubenswrapper[4903]: E0128 18:09:02.413913 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:09:07 crc kubenswrapper[4903]: I0128 18:09:07.336260 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:09:07 crc kubenswrapper[4903]: I0128 18:09:07.337010 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:09:07 crc kubenswrapper[4903]: I0128 18:09:07.388823 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:09:08 crc kubenswrapper[4903]: I0128 18:09:08.372764 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:09:08 crc kubenswrapper[4903]: I0128 18:09:08.437227 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27kv7"] Jan 28 18:09:10 crc kubenswrapper[4903]: I0128 18:09:10.315857 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-27kv7" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="registry-server" containerID="cri-o://18948defbddc1d1902bb07d815cccf260461c25041134c64799efdd05af410c0" gracePeriod=2 Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.329512 4903 generic.go:334] "Generic (PLEG): container finished" podID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerID="18948defbddc1d1902bb07d815cccf260461c25041134c64799efdd05af410c0" exitCode=0 Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.329654 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27kv7" event={"ID":"fd1d464c-e2c2-41b3-8bf2-426f5be2d626","Type":"ContainerDied","Data":"18948defbddc1d1902bb07d815cccf260461c25041134c64799efdd05af410c0"} Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.478276 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.615497 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-catalog-content\") pod \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.615992 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p28zk\" (UniqueName: \"kubernetes.io/projected/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-kube-api-access-p28zk\") pod \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.616125 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-utilities\") pod \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\" (UID: \"fd1d464c-e2c2-41b3-8bf2-426f5be2d626\") " Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.617488 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-utilities" (OuterVolumeSpecName: "utilities") pod "fd1d464c-e2c2-41b3-8bf2-426f5be2d626" (UID: "fd1d464c-e2c2-41b3-8bf2-426f5be2d626"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.621167 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-kube-api-access-p28zk" (OuterVolumeSpecName: "kube-api-access-p28zk") pod "fd1d464c-e2c2-41b3-8bf2-426f5be2d626" (UID: "fd1d464c-e2c2-41b3-8bf2-426f5be2d626"). InnerVolumeSpecName "kube-api-access-p28zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.642681 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd1d464c-e2c2-41b3-8bf2-426f5be2d626" (UID: "fd1d464c-e2c2-41b3-8bf2-426f5be2d626"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.718328 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.718368 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p28zk\" (UniqueName: \"kubernetes.io/projected/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-kube-api-access-p28zk\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:11 crc kubenswrapper[4903]: I0128 18:09:11.718378 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd1d464c-e2c2-41b3-8bf2-426f5be2d626-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.339746 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27kv7" event={"ID":"fd1d464c-e2c2-41b3-8bf2-426f5be2d626","Type":"ContainerDied","Data":"1e31e6801ea0cd7da08a631f667d77c613c668b1828650174d650265abecdf66"} Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.339782 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27kv7" Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.339801 4903 scope.go:117] "RemoveContainer" containerID="18948defbddc1d1902bb07d815cccf260461c25041134c64799efdd05af410c0" Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.372342 4903 scope.go:117] "RemoveContainer" containerID="b5b5fe953d3451d87f23a0a3422d7d65ed17516c100b77c4aa1e214190be1963" Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.381483 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27kv7"] Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.390919 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-27kv7"] Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.415933 4903 scope.go:117] "RemoveContainer" containerID="fc18ff558f3abbde0dd15a96b8dd2385f3804720da9c1858998b6ebcdd7ab236" Jan 28 18:09:12 crc kubenswrapper[4903]: I0128 18:09:12.427260 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" path="/var/lib/kubelet/pods/fd1d464c-e2c2-41b3-8bf2-426f5be2d626/volumes" Jan 28 18:09:13 crc kubenswrapper[4903]: I0128 18:09:13.413591 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:09:13 crc kubenswrapper[4903]: E0128 18:09:13.414178 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:09:27 crc kubenswrapper[4903]: I0128 18:09:27.414243 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:09:27 crc kubenswrapper[4903]: E0128 18:09:27.415086 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:09:37 crc kubenswrapper[4903]: I0128 18:09:37.587870 4903 generic.go:334] "Generic (PLEG): container finished" podID="f8a23799-e081-42b8-9c63-abc115dfdf94" containerID="4a479e636feabf00a73f9961e342be138de558fd1d5516f45e25130d4b7aef72" exitCode=0 Jan 28 18:09:37 crc kubenswrapper[4903]: I0128 18:09:37.588110 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" event={"ID":"f8a23799-e081-42b8-9c63-abc115dfdf94","Type":"ContainerDied","Data":"4a479e636feabf00a73f9961e342be138de558fd1d5516f45e25130d4b7aef72"} Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.139101 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.267704 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-1\") pod \"f8a23799-e081-42b8-9c63-abc115dfdf94\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.267901 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/f8a23799-e081-42b8-9c63-abc115dfdf94-kube-api-access-45dgc\") pod \"f8a23799-e081-42b8-9c63-abc115dfdf94\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.267954 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-inventory\") pod \"f8a23799-e081-42b8-9c63-abc115dfdf94\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.268058 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-0\") pod \"f8a23799-e081-42b8-9c63-abc115dfdf94\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.268115 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-2\") pod \"f8a23799-e081-42b8-9c63-abc115dfdf94\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.268152 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-telemetry-combined-ca-bundle\") pod \"f8a23799-e081-42b8-9c63-abc115dfdf94\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.268277 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ssh-key-openstack-cell1\") pod \"f8a23799-e081-42b8-9c63-abc115dfdf94\" (UID: \"f8a23799-e081-42b8-9c63-abc115dfdf94\") " Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.273905 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f8a23799-e081-42b8-9c63-abc115dfdf94" (UID: "f8a23799-e081-42b8-9c63-abc115dfdf94"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.280446 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a23799-e081-42b8-9c63-abc115dfdf94-kube-api-access-45dgc" (OuterVolumeSpecName: "kube-api-access-45dgc") pod "f8a23799-e081-42b8-9c63-abc115dfdf94" (UID: "f8a23799-e081-42b8-9c63-abc115dfdf94"). InnerVolumeSpecName "kube-api-access-45dgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.297144 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "f8a23799-e081-42b8-9c63-abc115dfdf94" (UID: "f8a23799-e081-42b8-9c63-abc115dfdf94"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.297232 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "f8a23799-e081-42b8-9c63-abc115dfdf94" (UID: "f8a23799-e081-42b8-9c63-abc115dfdf94"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.304750 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "f8a23799-e081-42b8-9c63-abc115dfdf94" (UID: "f8a23799-e081-42b8-9c63-abc115dfdf94"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.309895 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f8a23799-e081-42b8-9c63-abc115dfdf94" (UID: "f8a23799-e081-42b8-9c63-abc115dfdf94"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.324737 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-inventory" (OuterVolumeSpecName: "inventory") pod "f8a23799-e081-42b8-9c63-abc115dfdf94" (UID: "f8a23799-e081-42b8-9c63-abc115dfdf94"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.372703 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.372838 4903 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.372857 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/f8a23799-e081-42b8-9c63-abc115dfdf94-kube-api-access-45dgc\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.372870 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.372882 4903 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.372894 4903 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.372907 4903 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a23799-e081-42b8-9c63-abc115dfdf94-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.609264 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.611657 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-xmnlc" event={"ID":"f8a23799-e081-42b8-9c63-abc115dfdf94","Type":"ContainerDied","Data":"d6fe45cd426ad7c470db1c4429bd338bfd233614c4ca9a1346bc8b1b38a57733"} Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.611755 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6fe45cd426ad7c470db1c4429bd338bfd233614c4ca9a1346bc8b1b38a57733" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.707196 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-t8blf"] Jan 28 18:09:39 crc kubenswrapper[4903]: E0128 18:09:39.707712 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="extract-utilities" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.707734 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="extract-utilities" Jan 28 18:09:39 crc kubenswrapper[4903]: E0128 18:09:39.707761 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="registry-server" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.707769 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="registry-server" Jan 28 18:09:39 crc kubenswrapper[4903]: E0128 18:09:39.707780 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a23799-e081-42b8-9c63-abc115dfdf94" containerName="telemetry-openstack-openstack-cell1" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.707789 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a23799-e081-42b8-9c63-abc115dfdf94" containerName="telemetry-openstack-openstack-cell1" Jan 28 18:09:39 crc kubenswrapper[4903]: E0128 18:09:39.707805 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="extract-content" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.707813 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="extract-content" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.708066 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a23799-e081-42b8-9c63-abc115dfdf94" containerName="telemetry-openstack-openstack-cell1" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.708104 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1d464c-e2c2-41b3-8bf2-426f5be2d626" containerName="registry-server" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.708864 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.713158 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.713328 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.713590 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.713628 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.713839 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.724096 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-t8blf"] Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.884863 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.885659 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.885771 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.885802 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zvd\" (UniqueName: \"kubernetes.io/projected/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-kube-api-access-88zvd\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.885837 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.987733 4903 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.987830 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.988081 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.988156 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88zvd\" (UniqueName: \"kubernetes.io/projected/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-kube-api-access-88zvd\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.988247 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.993970 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:39 crc kubenswrapper[4903]: I0128 18:09:39.997371 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:40 crc kubenswrapper[4903]: I0128 18:09:40.000196 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:40 crc kubenswrapper[4903]: I0128 18:09:40.001123 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:40 crc kubenswrapper[4903]: I0128 18:09:40.015374 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88zvd\" (UniqueName: \"kubernetes.io/projected/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-kube-api-access-88zvd\") pod \"neutron-sriov-openstack-openstack-cell1-t8blf\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:40 crc kubenswrapper[4903]: I0128 18:09:40.032090 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:09:40 crc kubenswrapper[4903]: I0128 18:09:40.611339 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-t8blf"] Jan 28 18:09:41 crc kubenswrapper[4903]: I0128 18:09:41.414160 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:09:41 crc kubenswrapper[4903]: E0128 18:09:41.414763 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:09:41 crc kubenswrapper[4903]: I0128 18:09:41.627625 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" event={"ID":"c5fa40e5-68b9-4c8d-a041-af1dd70700f2","Type":"ContainerStarted","Data":"ddb79b481b0ddbee03b534eedce4e43c448a8e7b8c4f24a82e6afc8be5ba82c3"} Jan 28 18:09:41 crc kubenswrapper[4903]: I0128 18:09:41.627672 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" event={"ID":"c5fa40e5-68b9-4c8d-a041-af1dd70700f2","Type":"ContainerStarted","Data":"eef541045f04f61abf6b90537a53a726322de94850a419cfd039a0e40dfc3456"} Jan 28 18:09:41 crc kubenswrapper[4903]: I0128 18:09:41.662515 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" podStartSLOduration=2.189984536 podStartE2EDuration="2.662492053s" podCreationTimestamp="2026-01-28 18:09:39 +0000 UTC" firstStartedPulling="2026-01-28 18:09:40.619407797 +0000 UTC m=+8652.895379308" lastFinishedPulling="2026-01-28 18:09:41.091915314 +0000 UTC m=+8653.367886825" observedRunningTime="2026-01-28 18:09:41.648187976 +0000 UTC m=+8653.924159487" watchObservedRunningTime="2026-01-28 18:09:41.662492053 +0000 UTC m=+8653.938463584" Jan 28 18:09:55 crc kubenswrapper[4903]: I0128 18:09:55.413432 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:09:55 crc kubenswrapper[4903]: E0128 18:09:55.414367 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:10:06 crc kubenswrapper[4903]: I0128 18:10:06.413967 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:10:06 crc kubenswrapper[4903]: E0128 18:10:06.415253 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:10:18 crc kubenswrapper[4903]: I0128 18:10:18.421277 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:10:18 crc kubenswrapper[4903]: E0128 18:10:18.422220 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:10:32 crc kubenswrapper[4903]: I0128 18:10:32.413802 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:10:32 crc kubenswrapper[4903]: E0128 18:10:32.414636 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:10:34 crc kubenswrapper[4903]: I0128 18:10:34.125363 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rngpd"] Jan 28 18:10:34 crc kubenswrapper[4903]: I0128 18:10:34.128760 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:34 crc kubenswrapper[4903]: I0128 18:10:34.138369 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rngpd"] Jan 28 18:10:34 crc kubenswrapper[4903]: I0128 18:10:34.466135 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-catalog-content\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:34 crc kubenswrapper[4903]: I0128 18:10:34.466192 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7ps\" (UniqueName: \"kubernetes.io/projected/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-kube-api-access-2n7ps\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:34 crc kubenswrapper[4903]: I0128 18:10:34.466296 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-utilities\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:35 crc kubenswrapper[4903]: I0128 18:10:35.096338 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-catalog-content\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:35 crc kubenswrapper[4903]: I0128 18:10:35.096380 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n7ps\" (UniqueName: \"kubernetes.io/projected/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-kube-api-access-2n7ps\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:35 crc kubenswrapper[4903]: I0128 18:10:35.096501 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-utilities\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:35 crc kubenswrapper[4903]: I0128 18:10:35.100152 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-utilities\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:35 crc kubenswrapper[4903]: I0128 18:10:35.101340 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-catalog-content\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:35 crc kubenswrapper[4903]: I0128 18:10:35.388389 4903 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2n7ps\" (UniqueName: \"kubernetes.io/projected/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-kube-api-access-2n7ps\") pod \"certified-operators-rngpd\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:35 crc kubenswrapper[4903]: I0128 18:10:35.654417 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:36 crc kubenswrapper[4903]: I0128 18:10:36.228943 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rngpd"] Jan 28 18:10:37 crc kubenswrapper[4903]: I0128 18:10:37.123864 4903 generic.go:334] "Generic (PLEG): container finished" podID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerID="930c725ae5d29060aca74a48a1adc25efa445cf9a382796a898ea66dbd921d84" exitCode=0 Jan 28 18:10:37 crc kubenswrapper[4903]: I0128 18:10:37.123914 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rngpd" event={"ID":"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc","Type":"ContainerDied","Data":"930c725ae5d29060aca74a48a1adc25efa445cf9a382796a898ea66dbd921d84"} Jan 28 18:10:37 crc kubenswrapper[4903]: I0128 18:10:37.124166 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rngpd" event={"ID":"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc","Type":"ContainerStarted","Data":"21e8615098b33b4dc81c2907df68f2c03d397e21dad59ebe38c1355f53c3d671"} Jan 28 18:10:39 crc kubenswrapper[4903]: I0128 18:10:39.147369 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rngpd" event={"ID":"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc","Type":"ContainerStarted","Data":"6b1d70e4b8ef1698ec01bbc69f26b9ec358c36f1b9150d9fcf9662f4adb3c0af"} Jan 28 18:10:40 crc kubenswrapper[4903]: I0128 18:10:40.159074 4903 generic.go:334] "Generic (PLEG): container finished" podID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerID="6b1d70e4b8ef1698ec01bbc69f26b9ec358c36f1b9150d9fcf9662f4adb3c0af" exitCode=0 Jan 28 18:10:40 crc kubenswrapper[4903]: I0128 18:10:40.159171 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rngpd" event={"ID":"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc","Type":"ContainerDied","Data":"6b1d70e4b8ef1698ec01bbc69f26b9ec358c36f1b9150d9fcf9662f4adb3c0af"} Jan 28 18:10:42 crc kubenswrapper[4903]: I0128 18:10:42.183006 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rngpd" event={"ID":"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc","Type":"ContainerStarted","Data":"23ccc255740d4317a8cae56b70f564972e46a2a6a4d2d95de26b57836d1ab281"} Jan 28 18:10:42 crc kubenswrapper[4903]: I0128 18:10:42.216959 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rngpd" podStartSLOduration=3.602636623 podStartE2EDuration="8.216930958s" podCreationTimestamp="2026-01-28 18:10:34 +0000 UTC" firstStartedPulling="2026-01-28 18:10:37.132977773 +0000 UTC m=+8709.408949284" lastFinishedPulling="2026-01-28 18:10:41.747272108 +0000 UTC m=+8714.023243619" observedRunningTime="2026-01-28 18:10:42.209033154 +0000 UTC m=+8714.485004665" watchObservedRunningTime="2026-01-28 18:10:42.216930958 +0000 UTC m=+8714.492902469" Jan 28 18:10:44 crc kubenswrapper[4903]: I0128 18:10:44.200564 4903 generic.go:334] "Generic (PLEG): container finished" 
podID="c5fa40e5-68b9-4c8d-a041-af1dd70700f2" containerID="ddb79b481b0ddbee03b534eedce4e43c448a8e7b8c4f24a82e6afc8be5ba82c3" exitCode=0 Jan 28 18:10:44 crc kubenswrapper[4903]: I0128 18:10:44.200643 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" event={"ID":"c5fa40e5-68b9-4c8d-a041-af1dd70700f2","Type":"ContainerDied","Data":"ddb79b481b0ddbee03b534eedce4e43c448a8e7b8c4f24a82e6afc8be5ba82c3"} Jan 28 18:10:44 crc kubenswrapper[4903]: I0128 18:10:44.888088 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lknsv"] Jan 28 18:10:44 crc kubenswrapper[4903]: I0128 18:10:44.891572 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:44 crc kubenswrapper[4903]: I0128 18:10:44.900168 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lknsv"] Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.023400 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-catalog-content\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.023766 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-utilities\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.024103 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x994\" (UniqueName: \"kubernetes.io/projected/277bb399-4012-401f-b809-adf4d56fcfbf-kube-api-access-4x994\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.126361 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-utilities\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.126552 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x994\" (UniqueName: \"kubernetes.io/projected/277bb399-4012-401f-b809-adf4d56fcfbf-kube-api-access-4x994\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.126667 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-catalog-content\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.126998 4903 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-utilities\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.127144 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-catalog-content\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.144845 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x994\" (UniqueName: \"kubernetes.io/projected/277bb399-4012-401f-b809-adf4d56fcfbf-kube-api-access-4x994\") pod \"community-operators-lknsv\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.213825 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.654792 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.656255 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:45 crc kubenswrapper[4903]: I0128 18:10:45.780277 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lknsv"] Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.152754 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.220681 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" event={"ID":"c5fa40e5-68b9-4c8d-a041-af1dd70700f2","Type":"ContainerDied","Data":"eef541045f04f61abf6b90537a53a726322de94850a419cfd039a0e40dfc3456"} Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.220973 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef541045f04f61abf6b90537a53a726322de94850a419cfd039a0e40dfc3456" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.221728 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lknsv" event={"ID":"277bb399-4012-401f-b809-adf4d56fcfbf","Type":"ContainerStarted","Data":"95e66842a9b2cd8586f342228d292663f0b23ab644d2229ee8f92d8d459c1433"} Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.312149 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.457930 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88zvd\" (UniqueName: \"kubernetes.io/projected/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-kube-api-access-88zvd\") pod \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.458034 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-inventory\") pod \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.458141 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-ssh-key-openstack-cell1\") pod \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.458168 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-combined-ca-bundle\") pod \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.458207 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-agent-neutron-config-0\") pod \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\" (UID: \"c5fa40e5-68b9-4c8d-a041-af1dd70700f2\") " Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.464047 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "c5fa40e5-68b9-4c8d-a041-af1dd70700f2" (UID: "c5fa40e5-68b9-4c8d-a041-af1dd70700f2"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.464103 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-kube-api-access-88zvd" (OuterVolumeSpecName: "kube-api-access-88zvd") pod "c5fa40e5-68b9-4c8d-a041-af1dd70700f2" (UID: "c5fa40e5-68b9-4c8d-a041-af1dd70700f2"). InnerVolumeSpecName "kube-api-access-88zvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.493116 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "c5fa40e5-68b9-4c8d-a041-af1dd70700f2" (UID: "c5fa40e5-68b9-4c8d-a041-af1dd70700f2"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.493661 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c5fa40e5-68b9-4c8d-a041-af1dd70700f2" (UID: "c5fa40e5-68b9-4c8d-a041-af1dd70700f2"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.500817 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-inventory" (OuterVolumeSpecName: "inventory") pod "c5fa40e5-68b9-4c8d-a041-af1dd70700f2" (UID: "c5fa40e5-68b9-4c8d-a041-af1dd70700f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.571609 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.571678 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88zvd\" (UniqueName: \"kubernetes.io/projected/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-kube-api-access-88zvd\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.571692 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.571738 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:46 crc kubenswrapper[4903]: I0128 18:10:46.571753 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5fa40e5-68b9-4c8d-a041-af1dd70700f2-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.234005 4903 generic.go:334] "Generic (PLEG): container finished" podID="277bb399-4012-401f-b809-adf4d56fcfbf" containerID="9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0" exitCode=0 Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.235705 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lknsv" event={"ID":"277bb399-4012-401f-b809-adf4d56fcfbf","Type":"ContainerDied","Data":"9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0"} Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.235797 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-t8blf" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.284295 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.414227 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:10:47 crc kubenswrapper[4903]: E0128 18:10:47.414647 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.423882 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v"] Jan 28 18:10:47 crc kubenswrapper[4903]: E0128 18:10:47.424700 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5fa40e5-68b9-4c8d-a041-af1dd70700f2" containerName="neutron-sriov-openstack-openstack-cell1" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.424840 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5fa40e5-68b9-4c8d-a041-af1dd70700f2" containerName="neutron-sriov-openstack-openstack-cell1" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.425171 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5fa40e5-68b9-4c8d-a041-af1dd70700f2" containerName="neutron-sriov-openstack-openstack-cell1" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.426156 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.428753 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.428777 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.428849 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.429245 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.429439 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.438462 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v"] Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.592983 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.593294 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.593471 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.593617 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.593651 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxww\" (UniqueName: \"kubernetes.io/projected/8b21187c-e391-4715-be41-c2b56d925c85-kube-api-access-cwxww\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.695709 4903 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.696037 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.696173 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.696288 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.696376 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxww\" (UniqueName: \"kubernetes.io/projected/8b21187c-e391-4715-be41-c2b56d925c85-kube-api-access-cwxww\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.705059 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.705345 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.705370 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.705782 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.716250 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxww\" (UniqueName: \"kubernetes.io/projected/8b21187c-e391-4715-be41-c2b56d925c85-kube-api-access-cwxww\") pod \"neutron-dhcp-openstack-openstack-cell1-tqb4v\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:47 crc kubenswrapper[4903]: I0128 18:10:47.756287 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:10:48 crc kubenswrapper[4903]: I0128 18:10:48.303826 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v"] Jan 28 18:10:48 crc kubenswrapper[4903]: I0128 18:10:48.464821 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rngpd"] Jan 28 18:10:49 crc kubenswrapper[4903]: I0128 18:10:49.253434 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" event={"ID":"8b21187c-e391-4715-be41-c2b56d925c85","Type":"ContainerStarted","Data":"8338480aff44f5c375fb2a52ed3a098342c7edfd98a0f93017eb4377904a2d4c"} Jan 28 18:10:49 crc kubenswrapper[4903]: I0128 18:10:49.253616 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rngpd" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerName="registry-server" containerID="cri-o://23ccc255740d4317a8cae56b70f564972e46a2a6a4d2d95de26b57836d1ab281" gracePeriod=2 Jan 28 18:10:50 crc kubenswrapper[4903]: I0128 18:10:50.269304 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lknsv" event={"ID":"277bb399-4012-401f-b809-adf4d56fcfbf","Type":"ContainerStarted","Data":"fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d"} Jan 28 18:10:53 crc kubenswrapper[4903]: I0128 18:10:53.308137 4903 generic.go:334] "Generic (PLEG): container finished" podID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerID="23ccc255740d4317a8cae56b70f564972e46a2a6a4d2d95de26b57836d1ab281" exitCode=0 Jan 28 18:10:53 crc kubenswrapper[4903]: I0128 18:10:53.308223 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rngpd" event={"ID":"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc","Type":"ContainerDied","Data":"23ccc255740d4317a8cae56b70f564972e46a2a6a4d2d95de26b57836d1ab281"} Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.709468 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.767071 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-catalog-content\") pod \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.767264 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-utilities\") pod \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.767381 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n7ps\" (UniqueName: \"kubernetes.io/projected/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-kube-api-access-2n7ps\") pod \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\" (UID: \"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc\") " Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.769047 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-utilities" (OuterVolumeSpecName: "utilities") pod "bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" (UID: "bd96bfca-cc72-44e6-8f7d-1d468e61c6fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.772079 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-kube-api-access-2n7ps" (OuterVolumeSpecName: "kube-api-access-2n7ps") pod "bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" (UID: "bd96bfca-cc72-44e6-8f7d-1d468e61c6fc"). InnerVolumeSpecName "kube-api-access-2n7ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.772723 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.772762 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n7ps\" (UniqueName: \"kubernetes.io/projected/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-kube-api-access-2n7ps\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.813347 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" (UID: "bd96bfca-cc72-44e6-8f7d-1d468e61c6fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:10:54 crc kubenswrapper[4903]: I0128 18:10:54.874309 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.345568 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" event={"ID":"8b21187c-e391-4715-be41-c2b56d925c85","Type":"ContainerStarted","Data":"a550cf57197c946f8e848813f40c139ebb07c708dcda96a422641d6b2021fd94"} Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.349442 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rngpd" event={"ID":"bd96bfca-cc72-44e6-8f7d-1d468e61c6fc","Type":"ContainerDied","Data":"21e8615098b33b4dc81c2907df68f2c03d397e21dad59ebe38c1355f53c3d671"} Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.349512 4903 scope.go:117] "RemoveContainer" containerID="23ccc255740d4317a8cae56b70f564972e46a2a6a4d2d95de26b57836d1ab281" Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.349756 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rngpd" Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.365828 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" podStartSLOduration=2.830215558 podStartE2EDuration="8.365771391s" podCreationTimestamp="2026-01-28 18:10:47 +0000 UTC" firstStartedPulling="2026-01-28 18:10:48.691578137 +0000 UTC m=+8720.967549648" lastFinishedPulling="2026-01-28 18:10:54.22713396 +0000 UTC m=+8726.503105481" observedRunningTime="2026-01-28 18:10:55.363341185 +0000 UTC m=+8727.639312706" watchObservedRunningTime="2026-01-28 18:10:55.365771391 +0000 UTC m=+8727.641742902" Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.395867 4903 scope.go:117] "RemoveContainer" containerID="6b1d70e4b8ef1698ec01bbc69f26b9ec358c36f1b9150d9fcf9662f4adb3c0af" Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.398652 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rngpd"] Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.410179 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rngpd"] Jan 28 18:10:55 crc kubenswrapper[4903]: I0128 18:10:55.418312 4903 scope.go:117] "RemoveContainer" containerID="930c725ae5d29060aca74a48a1adc25efa445cf9a382796a898ea66dbd921d84" Jan 28 18:10:56 crc kubenswrapper[4903]: I0128 18:10:56.428139 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" path="/var/lib/kubelet/pods/bd96bfca-cc72-44e6-8f7d-1d468e61c6fc/volumes" Jan 28 18:11:01 crc kubenswrapper[4903]: I0128 18:11:01.416407 4903 generic.go:334] "Generic (PLEG): container finished" podID="277bb399-4012-401f-b809-adf4d56fcfbf" containerID="fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d" exitCode=0 Jan 28 18:11:01 crc kubenswrapper[4903]: I0128 18:11:01.416487 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lknsv" event={"ID":"277bb399-4012-401f-b809-adf4d56fcfbf","Type":"ContainerDied","Data":"fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d"} Jan 28 18:11:02 crc 
kubenswrapper[4903]: I0128 18:11:02.414108 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:11:02 crc kubenswrapper[4903]: E0128 18:11:02.414788 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:11:03 crc kubenswrapper[4903]: I0128 18:11:03.440367 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lknsv" event={"ID":"277bb399-4012-401f-b809-adf4d56fcfbf","Type":"ContainerStarted","Data":"cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048"} Jan 28 18:11:03 crc kubenswrapper[4903]: I0128 18:11:03.470006 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lknsv" podStartSLOduration=4.306318749 podStartE2EDuration="19.469961954s" podCreationTimestamp="2026-01-28 18:10:44 +0000 UTC" firstStartedPulling="2026-01-28 18:10:47.237549548 +0000 UTC m=+8719.513521059" lastFinishedPulling="2026-01-28 18:11:02.401192753 +0000 UTC m=+8734.677164264" observedRunningTime="2026-01-28 18:11:03.461273128 +0000 UTC m=+8735.737244649" watchObservedRunningTime="2026-01-28 18:11:03.469961954 +0000 UTC m=+8735.745933465" Jan 28 18:11:05 crc kubenswrapper[4903]: I0128 18:11:05.214445 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:11:05 crc kubenswrapper[4903]: I0128 18:11:05.214809 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:11:06 crc kubenswrapper[4903]: I0128 18:11:06.267108 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lknsv" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="registry-server" probeResult="failure" output=< Jan 28 18:11:06 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 18:11:06 crc kubenswrapper[4903]: > Jan 28 18:11:14 crc kubenswrapper[4903]: I0128 18:11:14.413699 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:11:14 crc kubenswrapper[4903]: E0128 18:11:14.414440 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:11:15 crc kubenswrapper[4903]: I0128 18:11:15.259047 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:11:15 crc kubenswrapper[4903]: I0128 18:11:15.319602 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:11:16 crc kubenswrapper[4903]: I0128 18:11:16.086095 4903 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lknsv"] Jan 28 18:11:16 crc kubenswrapper[4903]: I0128 18:11:16.591303 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lknsv" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="registry-server" containerID="cri-o://cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048" gracePeriod=2 Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.122591 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.260568 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x994\" (UniqueName: \"kubernetes.io/projected/277bb399-4012-401f-b809-adf4d56fcfbf-kube-api-access-4x994\") pod \"277bb399-4012-401f-b809-adf4d56fcfbf\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.260648 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-catalog-content\") pod \"277bb399-4012-401f-b809-adf4d56fcfbf\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.260791 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-utilities\") pod \"277bb399-4012-401f-b809-adf4d56fcfbf\" (UID: \"277bb399-4012-401f-b809-adf4d56fcfbf\") " Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.261772 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-utilities" (OuterVolumeSpecName: "utilities") pod "277bb399-4012-401f-b809-adf4d56fcfbf" (UID: "277bb399-4012-401f-b809-adf4d56fcfbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.267006 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277bb399-4012-401f-b809-adf4d56fcfbf-kube-api-access-4x994" (OuterVolumeSpecName: "kube-api-access-4x994") pod "277bb399-4012-401f-b809-adf4d56fcfbf" (UID: "277bb399-4012-401f-b809-adf4d56fcfbf"). InnerVolumeSpecName "kube-api-access-4x994". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.327457 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "277bb399-4012-401f-b809-adf4d56fcfbf" (UID: "277bb399-4012-401f-b809-adf4d56fcfbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.365144 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.365357 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x994\" (UniqueName: \"kubernetes.io/projected/277bb399-4012-401f-b809-adf4d56fcfbf-kube-api-access-4x994\") on node \"crc\" DevicePath \"\"" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.365460 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277bb399-4012-401f-b809-adf4d56fcfbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.608791 4903 generic.go:334] "Generic (PLEG): container finished" podID="277bb399-4012-401f-b809-adf4d56fcfbf" containerID="cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048" exitCode=0 Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.608863 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lknsv" event={"ID":"277bb399-4012-401f-b809-adf4d56fcfbf","Type":"ContainerDied","Data":"cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048"} Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.608974 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lknsv" event={"ID":"277bb399-4012-401f-b809-adf4d56fcfbf","Type":"ContainerDied","Data":"95e66842a9b2cd8586f342228d292663f0b23ab644d2229ee8f92d8d459c1433"} Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.609003 4903 scope.go:117] "RemoveContainer" containerID="cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.609330 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lknsv" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.629105 4903 scope.go:117] "RemoveContainer" containerID="fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.655340 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lknsv"] Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.660829 4903 scope.go:117] "RemoveContainer" containerID="9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.663793 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lknsv"] Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.705619 4903 scope.go:117] "RemoveContainer" containerID="cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048" Jan 28 18:11:17 crc kubenswrapper[4903]: E0128 18:11:17.706150 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048\": container with ID starting with cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048 not found: ID does not exist" containerID="cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.706211 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048"} err="failed to get container status \"cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048\": rpc error: code = NotFound desc = could not find container \"cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048\": container with ID starting with cb89c827de5bf0eb542180cae4a19880afbd60b4fed24e8dc6b81a6c85c42048 not found: ID does not exist" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.706247 4903 scope.go:117] "RemoveContainer" containerID="fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d" Jan 28 18:11:17 crc kubenswrapper[4903]: E0128 18:11:17.706740 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d\": container with ID starting with fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d not found: ID does not exist" containerID="fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.706772 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d"} err="failed to get container status \"fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d\": rpc error: code = NotFound desc = could not find container \"fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d\": container with ID starting with fc670557b78d49961a87832bf69dd5032fdc5455475c0d3d6f552763f7c9594d not found: ID does not exist" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.706791 4903 scope.go:117] "RemoveContainer" containerID="9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0" Jan 28 18:11:17 crc kubenswrapper[4903]: E0128 18:11:17.707340 4903 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0\": container with ID starting with 9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0 not found: ID does not exist" containerID="9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0" Jan 28 18:11:17 crc kubenswrapper[4903]: I0128 18:11:17.707388 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0"} err="failed to get container status \"9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0\": rpc error: code = NotFound desc = could not find container \"9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0\": container with ID starting with 9079cfe940dd7b410a00d1d6570428cbb4441b5234d5aac1216f9d9da26b4bc0 not found: ID does not exist" Jan 28 18:11:18 crc kubenswrapper[4903]: I0128 18:11:18.429245 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" path="/var/lib/kubelet/pods/277bb399-4012-401f-b809-adf4d56fcfbf/volumes" Jan 28 18:11:27 crc kubenswrapper[4903]: I0128 18:11:27.413419 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:11:27 crc kubenswrapper[4903]: E0128 18:11:27.414371 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:11:40 crc kubenswrapper[4903]: I0128 18:11:40.414181 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:11:40 crc kubenswrapper[4903]: E0128 18:11:40.414995 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:11:54 crc kubenswrapper[4903]: I0128 18:11:54.414417 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:11:54 crc kubenswrapper[4903]: E0128 18:11:54.415219 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:12:09 crc kubenswrapper[4903]: I0128 18:12:09.413349 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:12:09 crc kubenswrapper[4903]: E0128 18:12:09.414211 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:12:19 crc kubenswrapper[4903]: I0128 18:12:19.196911 4903 generic.go:334] "Generic (PLEG): container finished" podID="8b21187c-e391-4715-be41-c2b56d925c85" containerID="a550cf57197c946f8e848813f40c139ebb07c708dcda96a422641d6b2021fd94" exitCode=0 Jan 28 18:12:19 crc kubenswrapper[4903]: I0128 18:12:19.197026 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" event={"ID":"8b21187c-e391-4715-be41-c2b56d925c85","Type":"ContainerDied","Data":"a550cf57197c946f8e848813f40c139ebb07c708dcda96a422641d6b2021fd94"} Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.033131 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.184263 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwxww\" (UniqueName: \"kubernetes.io/projected/8b21187c-e391-4715-be41-c2b56d925c85-kube-api-access-cwxww\") pod \"8b21187c-e391-4715-be41-c2b56d925c85\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.184345 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-inventory\") pod \"8b21187c-e391-4715-be41-c2b56d925c85\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.184467 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-agent-neutron-config-0\") pod \"8b21187c-e391-4715-be41-c2b56d925c85\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.184521 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-combined-ca-bundle\") pod \"8b21187c-e391-4715-be41-c2b56d925c85\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.184664 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-ssh-key-openstack-cell1\") pod \"8b21187c-e391-4715-be41-c2b56d925c85\" (UID: \"8b21187c-e391-4715-be41-c2b56d925c85\") " Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.190577 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b21187c-e391-4715-be41-c2b56d925c85-kube-api-access-cwxww" (OuterVolumeSpecName: "kube-api-access-cwxww") pod "8b21187c-e391-4715-be41-c2b56d925c85" (UID: "8b21187c-e391-4715-be41-c2b56d925c85"). InnerVolumeSpecName "kube-api-access-cwxww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.190601 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "8b21187c-e391-4715-be41-c2b56d925c85" (UID: "8b21187c-e391-4715-be41-c2b56d925c85"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.214024 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-inventory" (OuterVolumeSpecName: "inventory") pod "8b21187c-e391-4715-be41-c2b56d925c85" (UID: "8b21187c-e391-4715-be41-c2b56d925c85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.217443 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8b21187c-e391-4715-be41-c2b56d925c85" (UID: "8b21187c-e391-4715-be41-c2b56d925c85"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.226840 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" event={"ID":"8b21187c-e391-4715-be41-c2b56d925c85","Type":"ContainerDied","Data":"8338480aff44f5c375fb2a52ed3a098342c7edfd98a0f93017eb4377904a2d4c"} Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.226883 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8338480aff44f5c375fb2a52ed3a098342c7edfd98a0f93017eb4377904a2d4c" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.226946 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-tqb4v" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.227299 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "8b21187c-e391-4715-be41-c2b56d925c85" (UID: "8b21187c-e391-4715-be41-c2b56d925c85"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.287743 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.287779 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.287793 4903 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.287807 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8b21187c-e391-4715-be41-c2b56d925c85-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.287817 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwxww\" (UniqueName: \"kubernetes.io/projected/8b21187c-e391-4715-be41-c2b56d925c85-kube-api-access-cwxww\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:21 crc kubenswrapper[4903]: I0128 18:12:21.413842 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:12:21 crc kubenswrapper[4903]: E0128 18:12:21.414373 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:12:34 crc kubenswrapper[4903]: I0128 18:12:34.414098 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:12:34 crc kubenswrapper[4903]: E0128 18:12:34.414860 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:12:45 crc kubenswrapper[4903]: I0128 18:12:45.414098 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:12:45 crc kubenswrapper[4903]: E0128 18:12:45.414872 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:12:48 crc kubenswrapper[4903]: I0128 
18:12:48.432332 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:12:48 crc kubenswrapper[4903]: I0128 18:12:48.433107 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="ad5e0d41-5311-4d00-b9e8-69915bf46fd9" containerName="nova-cell0-conductor-conductor" containerID="cri-o://793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" gracePeriod=30 Jan 28 18:12:48 crc kubenswrapper[4903]: I0128 18:12:48.928785 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:12:48 crc kubenswrapper[4903]: I0128 18:12:48.929582 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="9a06e697-989a-4142-b291-83e72a63b996" containerName="nova-cell1-conductor-conductor" containerID="cri-o://da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" gracePeriod=30 Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.100651 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.100919 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-log" containerID="cri-o://aba26193ab75a927a7f1998623cec92597d372c49578d8cc33d5e37ea6f0b0ce" gracePeriod=30 Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.101093 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-api" containerID="cri-o://82fea43a61dc69dea5960abf4d7ddf92fde43e925d930d2f5160a1444485723d" gracePeriod=30 Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.123133 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.123347 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ab3464ba-e769-4e18-a7ff-4c752456a9ee" containerName="nova-scheduler-scheduler" containerID="cri-o://7a929a3c1472352096fed7e06d804435e586d826e013870d349fbcd417bf7df1" gracePeriod=30 Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.151301 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.151524 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-log" containerID="cri-o://909ff26c97bfd0b3061edc9b87e9f176717635cd4a9cf3a1bdd9edf777821d6f" gracePeriod=30 Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.151658 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-metadata" containerID="cri-o://3d09968fc0f58cdd94e16d0b629d255e836f2b598eefceb170d29cecdebe2569" gracePeriod=30 Jan 28 18:12:49 crc kubenswrapper[4903]: E0128 18:12:49.445630 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:12:49 crc kubenswrapper[4903]: E0128 18:12:49.453981 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:12:49 crc kubenswrapper[4903]: E0128 18:12:49.455410 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:12:49 crc kubenswrapper[4903]: E0128 18:12:49.455482 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="9a06e697-989a-4142-b291-83e72a63b996" containerName="nova-cell1-conductor-conductor" Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.540865 4903 generic.go:334] "Generic (PLEG): container finished" podID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerID="aba26193ab75a927a7f1998623cec92597d372c49578d8cc33d5e37ea6f0b0ce" exitCode=143 Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.540941 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5","Type":"ContainerDied","Data":"aba26193ab75a927a7f1998623cec92597d372c49578d8cc33d5e37ea6f0b0ce"} Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.543886 4903 generic.go:334] "Generic (PLEG): container finished" podID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerID="909ff26c97bfd0b3061edc9b87e9f176717635cd4a9cf3a1bdd9edf777821d6f" exitCode=143 Jan 28 18:12:49 crc kubenswrapper[4903]: I0128 18:12:49.543919 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e","Type":"ContainerDied","Data":"909ff26c97bfd0b3061edc9b87e9f176717635cd4a9cf3a1bdd9edf777821d6f"} Jan 28 18:12:50 crc kubenswrapper[4903]: E0128 18:12:50.126131 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:12:50 crc kubenswrapper[4903]: E0128 18:12:50.127715 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:12:50 crc kubenswrapper[4903]: E0128 18:12:50.128854 4903 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 18:12:50 crc kubenswrapper[4903]: 
E0128 18:12:50.128898 4903 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="ad5e0d41-5311-4d00-b9e8-69915bf46fd9" containerName="nova-cell0-conductor-conductor" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.222469 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.341032 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-config-data\") pod \"9a06e697-989a-4142-b291-83e72a63b996\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.341118 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhdpg\" (UniqueName: \"kubernetes.io/projected/9a06e697-989a-4142-b291-83e72a63b996-kube-api-access-nhdpg\") pod \"9a06e697-989a-4142-b291-83e72a63b996\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.341163 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-combined-ca-bundle\") pod \"9a06e697-989a-4142-b291-83e72a63b996\" (UID: \"9a06e697-989a-4142-b291-83e72a63b996\") " Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.347469 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a06e697-989a-4142-b291-83e72a63b996-kube-api-access-nhdpg" (OuterVolumeSpecName: "kube-api-access-nhdpg") pod "9a06e697-989a-4142-b291-83e72a63b996" (UID: "9a06e697-989a-4142-b291-83e72a63b996"). InnerVolumeSpecName "kube-api-access-nhdpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.372213 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a06e697-989a-4142-b291-83e72a63b996" (UID: "9a06e697-989a-4142-b291-83e72a63b996"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.385365 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-config-data" (OuterVolumeSpecName: "config-data") pod "9a06e697-989a-4142-b291-83e72a63b996" (UID: "9a06e697-989a-4142-b291-83e72a63b996"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.444351 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.444389 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhdpg\" (UniqueName: \"kubernetes.io/projected/9a06e697-989a-4142-b291-83e72a63b996-kube-api-access-nhdpg\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.444403 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a06e697-989a-4142-b291-83e72a63b996-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.565279 4903 generic.go:334] "Generic (PLEG): container finished" podID="9a06e697-989a-4142-b291-83e72a63b996" containerID="da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" exitCode=0 Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.565339 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9a06e697-989a-4142-b291-83e72a63b996","Type":"ContainerDied","Data":"da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16"} Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.565631 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9a06e697-989a-4142-b291-83e72a63b996","Type":"ContainerDied","Data":"afc2b2beceb14cf4e7ea1f7c450288776b74ebc23e29ee45ee29141782346da6"} Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.565372 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.565684 4903 scope.go:117] "RemoveContainer" containerID="da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.567890 4903 generic.go:334] "Generic (PLEG): container finished" podID="ab3464ba-e769-4e18-a7ff-4c752456a9ee" containerID="7a929a3c1472352096fed7e06d804435e586d826e013870d349fbcd417bf7df1" exitCode=0 Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.567930 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab3464ba-e769-4e18-a7ff-4c752456a9ee","Type":"ContainerDied","Data":"7a929a3c1472352096fed7e06d804435e586d826e013870d349fbcd417bf7df1"} Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.613380 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.624144 4903 scope.go:117] "RemoveContainer" containerID="da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.624743 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16\": container with ID starting with da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16 not found: ID does not exist" containerID="da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.624806 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16"} err="failed to get container status \"da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16\": rpc error: code = NotFound desc = could not find container \"da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16\": container with ID starting with da8f80ed041990955ed3e9cb0b328da883aceb75c7b617d56e9b56193967ed16 not found: ID does not exist" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.625622 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.640593 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641222 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b21187c-e391-4715-be41-c2b56d925c85" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641254 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b21187c-e391-4715-be41-c2b56d925c85" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641270 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a06e697-989a-4142-b291-83e72a63b996" containerName="nova-cell1-conductor-conductor" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641278 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a06e697-989a-4142-b291-83e72a63b996" containerName="nova-cell1-conductor-conductor" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641290 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" 
containerName="extract-content" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641298 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerName="extract-content" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641324 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="extract-content" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641332 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="extract-content" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641341 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="extract-utilities" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641349 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="extract-utilities" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641380 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerName="extract-utilities" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641391 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerName="extract-utilities" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641408 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerName="registry-server" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641416 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerName="registry-server" Jan 28 18:12:51 crc kubenswrapper[4903]: E0128 18:12:51.641434 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="registry-server" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641442 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="registry-server" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641684 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a06e697-989a-4142-b291-83e72a63b996" containerName="nova-cell1-conductor-conductor" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641705 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b21187c-e391-4715-be41-c2b56d925c85" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641723 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="277bb399-4012-401f-b809-adf4d56fcfbf" containerName="registry-server" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.641737 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd96bfca-cc72-44e6-8f7d-1d468e61c6fc" containerName="registry-server" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.642597 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.646293 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.653400 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.751162 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424b2be6-5805-4678-a636-ee7b6d2e0392-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.751494 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c47v\" (UniqueName: \"kubernetes.io/projected/424b2be6-5805-4678-a636-ee7b6d2e0392-kube-api-access-2c47v\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.751721 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424b2be6-5805-4678-a636-ee7b6d2e0392-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.848643 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.853813 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424b2be6-5805-4678-a636-ee7b6d2e0392-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.853862 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c47v\" (UniqueName: \"kubernetes.io/projected/424b2be6-5805-4678-a636-ee7b6d2e0392-kube-api-access-2c47v\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.853925 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424b2be6-5805-4678-a636-ee7b6d2e0392-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.860569 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424b2be6-5805-4678-a636-ee7b6d2e0392-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.860778 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424b2be6-5805-4678-a636-ee7b6d2e0392-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.906962 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c47v\" (UniqueName: \"kubernetes.io/projected/424b2be6-5805-4678-a636-ee7b6d2e0392-kube-api-access-2c47v\") pod \"nova-cell1-conductor-0\" (UID: \"424b2be6-5805-4678-a636-ee7b6d2e0392\") " pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.954984 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mqzt\" (UniqueName: \"kubernetes.io/projected/ab3464ba-e769-4e18-a7ff-4c752456a9ee-kube-api-access-4mqzt\") pod \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.955137 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-combined-ca-bundle\") pod \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.955258 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-config-data\") pod \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\" (UID: \"ab3464ba-e769-4e18-a7ff-4c752456a9ee\") " Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.960978 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3464ba-e769-4e18-a7ff-4c752456a9ee-kube-api-access-4mqzt" (OuterVolumeSpecName: "kube-api-access-4mqzt") pod "ab3464ba-e769-4e18-a7ff-4c752456a9ee" (UID: "ab3464ba-e769-4e18-a7ff-4c752456a9ee"). InnerVolumeSpecName "kube-api-access-4mqzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.967810 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.992406 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-config-data" (OuterVolumeSpecName: "config-data") pod "ab3464ba-e769-4e18-a7ff-4c752456a9ee" (UID: "ab3464ba-e769-4e18-a7ff-4c752456a9ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:51 crc kubenswrapper[4903]: I0128 18:12:51.993187 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab3464ba-e769-4e18-a7ff-4c752456a9ee" (UID: "ab3464ba-e769-4e18-a7ff-4c752456a9ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.058765 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mqzt\" (UniqueName: \"kubernetes.io/projected/ab3464ba-e769-4e18-a7ff-4c752456a9ee-kube-api-access-4mqzt\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.059104 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.059120 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3464ba-e769-4e18-a7ff-4c752456a9ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.438395 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a06e697-989a-4142-b291-83e72a63b996" path="/var/lib/kubelet/pods/9a06e697-989a-4142-b291-83e72a63b996/volumes" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.467314 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.578544 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"424b2be6-5805-4678-a636-ee7b6d2e0392","Type":"ContainerStarted","Data":"1d1be04dfb6f29d3b9be854b2d998a9da71ccc9260bb6cad28466874851bad47"} Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.584499 4903 generic.go:334] "Generic (PLEG): container finished" podID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerID="3d09968fc0f58cdd94e16d0b629d255e836f2b598eefceb170d29cecdebe2569" exitCode=0 Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.584575 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e","Type":"ContainerDied","Data":"3d09968fc0f58cdd94e16d0b629d255e836f2b598eefceb170d29cecdebe2569"} Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.586587 4903 generic.go:334] "Generic (PLEG): container finished" podID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerID="82fea43a61dc69dea5960abf4d7ddf92fde43e925d930d2f5160a1444485723d" exitCode=0 Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.586646 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5","Type":"ContainerDied","Data":"82fea43a61dc69dea5960abf4d7ddf92fde43e925d930d2f5160a1444485723d"} Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.603643 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ab3464ba-e769-4e18-a7ff-4c752456a9ee","Type":"ContainerDied","Data":"1511f11862de2bdb23980960168607932cba63cc8161760ac69d267032243bf9"} Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.603733 4903 scope.go:117] "RemoveContainer" containerID="7a929a3c1472352096fed7e06d804435e586d826e013870d349fbcd417bf7df1" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.603742 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.661746 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.680481 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.702836 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:12:52 crc kubenswrapper[4903]: E0128 18:12:52.703300 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3464ba-e769-4e18-a7ff-4c752456a9ee" containerName="nova-scheduler-scheduler" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.703317 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3464ba-e769-4e18-a7ff-4c752456a9ee" containerName="nova-scheduler-scheduler" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.703558 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3464ba-e769-4e18-a7ff-4c752456a9ee" containerName="nova-scheduler-scheduler" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.704271 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.710432 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.745928 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.751086 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.778477 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7khq\" (UniqueName: \"kubernetes.io/projected/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-kube-api-access-k7khq\") pod \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.778563 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-config-data\") pod \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.778619 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-logs\") pod \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.778677 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-combined-ca-bundle\") pod \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.778829 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-internal-tls-certs\") pod \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\" (UID: 
\"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.778933 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-public-tls-certs\") pod \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\" (UID: \"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.779233 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed58bc3b-2319-44c1-8864-666e333d559b-config-data\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.779295 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed58bc3b-2319-44c1-8864-666e333d559b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.779437 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t6rs\" (UniqueName: \"kubernetes.io/projected/ed58bc3b-2319-44c1-8864-666e333d559b-kube-api-access-8t6rs\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.781656 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-logs" (OuterVolumeSpecName: "logs") pod "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" (UID: "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.800498 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pd8tk"] Jan 28 18:12:52 crc kubenswrapper[4903]: E0128 18:12:52.800838 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-log" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.800853 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-log" Jan 28 18:12:52 crc kubenswrapper[4903]: E0128 18:12:52.800895 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-api" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.800901 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-api" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.801097 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-log" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.801125 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" containerName="nova-api-api" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.802662 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.805331 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-kube-api-access-k7khq" (OuterVolumeSpecName: "kube-api-access-k7khq") pod "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" (UID: "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5"). InnerVolumeSpecName "kube-api-access-k7khq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.828556 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pd8tk"] Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.830242 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" (UID: "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.854927 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.877706 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-config-data" (OuterVolumeSpecName: "config-data") pod "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" (UID: "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.880811 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t6rs\" (UniqueName: \"kubernetes.io/projected/ed58bc3b-2319-44c1-8864-666e333d559b-kube-api-access-8t6rs\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.880924 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-utilities\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881007 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed58bc3b-2319-44c1-8864-666e333d559b-config-data\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881070 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed58bc3b-2319-44c1-8864-666e333d559b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881137 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjqw8\" (UniqueName: \"kubernetes.io/projected/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-kube-api-access-kjqw8\") 
pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881225 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-catalog-content\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881306 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7khq\" (UniqueName: \"kubernetes.io/projected/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-kube-api-access-k7khq\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881317 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881327 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.881336 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.899445 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed58bc3b-2319-44c1-8864-666e333d559b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.903334 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed58bc3b-2319-44c1-8864-666e333d559b-config-data\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.906141 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t6rs\" (UniqueName: \"kubernetes.io/projected/ed58bc3b-2319-44c1-8864-666e333d559b-kube-api-access-8t6rs\") pod \"nova-scheduler-0\" (UID: \"ed58bc3b-2319-44c1-8864-666e333d559b\") " pod="openstack/nova-scheduler-0" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.933006 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" (UID: "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.955691 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" (UID: "ecee36c3-73e5-4e3b-8eb8-c29eae84dab5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.982276 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvh4k\" (UniqueName: \"kubernetes.io/projected/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-kube-api-access-tvh4k\") pod \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.982433 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-nova-metadata-tls-certs\") pod \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.982515 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-logs\") pod \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.982565 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-config-data\") pod \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.982702 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-combined-ca-bundle\") pod \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\" (UID: \"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e\") " Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.982974 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-utilities\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.983103 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjqw8\" (UniqueName: \"kubernetes.io/projected/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-kube-api-access-kjqw8\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.983172 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-catalog-content\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.983250 4903 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.983261 4903 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 
18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.983662 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-catalog-content\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.984600 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-utilities\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.987620 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-logs" (OuterVolumeSpecName: "logs") pod "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" (UID: "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:12:52 crc kubenswrapper[4903]: I0128 18:12:52.993762 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-kube-api-access-tvh4k" (OuterVolumeSpecName: "kube-api-access-tvh4k") pod "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" (UID: "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e"). InnerVolumeSpecName "kube-api-access-tvh4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.002963 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjqw8\" (UniqueName: \"kubernetes.io/projected/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-kube-api-access-kjqw8\") pod \"redhat-operators-pd8tk\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.034060 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" (UID: "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.077908 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-config-data" (OuterVolumeSpecName: "config-data") pod "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" (UID: "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.084799 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.084828 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvh4k\" (UniqueName: \"kubernetes.io/projected/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-kube-api-access-tvh4k\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.084839 4903 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.084849 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.107784 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" (UID: "e8fd66b0-cfd3-423c-9ba7-8a6a017c239e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.124984 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.169843 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.187132 4903 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.620078 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"424b2be6-5805-4678-a636-ee7b6d2e0392","Type":"ContainerStarted","Data":"9f5a960b16b00c27159b23ff48cf87d079e53e32d6aee5106359fcb79339eac3"} Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.624679 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.629691 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8fd66b0-cfd3-423c-9ba7-8a6a017c239e","Type":"ContainerDied","Data":"14911dbe71de799cff33716dd6c3224cedae68fc8abae437e5f694edf53636af"} Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.629762 4903 scope.go:117] "RemoveContainer" containerID="3d09968fc0f58cdd94e16d0b629d255e836f2b598eefceb170d29cecdebe2569" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.629718 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.645909 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ecee36c3-73e5-4e3b-8eb8-c29eae84dab5","Type":"ContainerDied","Data":"adbe9eb04e81af29d5f433d20491f56e9618a6dfb1996b93478fae3afc0fb9e7"} Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.646006 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.655100 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.655081305 podStartE2EDuration="2.655081305s" podCreationTimestamp="2026-01-28 18:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:12:53.645291801 +0000 UTC m=+8845.921263312" watchObservedRunningTime="2026-01-28 18:12:53.655081305 +0000 UTC m=+8845.931052816" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.688789 4903 scope.go:117] "RemoveContainer" containerID="909ff26c97bfd0b3061edc9b87e9f176717635cd4a9cf3a1bdd9edf777821d6f" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.703833 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.721113 4903 scope.go:117] "RemoveContainer" containerID="82fea43a61dc69dea5960abf4d7ddf92fde43e925d930d2f5160a1444485723d" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.729593 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.759658 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.791500 4903 scope.go:117] "RemoveContainer" containerID="aba26193ab75a927a7f1998623cec92597d372c49578d8cc33d5e37ea6f0b0ce" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.799388 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.812153 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.835248 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: E0128 18:12:53.835701 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-metadata" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.835716 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-metadata" Jan 28 18:12:53 crc kubenswrapper[4903]: E0128 18:12:53.835732 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-log" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.835738 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-log" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.835932 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-metadata" Jan 28 18:12:53 crc 
kubenswrapper[4903]: I0128 18:12:53.835951 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" containerName="nova-metadata-log" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.839071 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.848017 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.849937 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.851814 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.852019 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.852474 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.856207 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.859871 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.878662 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.893938 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.927897 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pd8tk"] Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.951761 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01ef5a0e-b9ae-45ea-806c-8ac446033d60-logs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.951846 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.951916 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b1a31c-5cb8-4db8-9405-36870fe48431-logs\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.952110 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.952324 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-config-data\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.952385 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-public-tls-certs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.952427 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rffxg\" (UniqueName: \"kubernetes.io/projected/81b1a31c-5cb8-4db8-9405-36870fe48431-kube-api-access-rffxg\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.952515 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htthb\" (UniqueName: \"kubernetes.io/projected/01ef5a0e-b9ae-45ea-806c-8ac446033d60-kube-api-access-htthb\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.956640 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.956754 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:53 crc kubenswrapper[4903]: I0128 18:12:53.956901 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-config-data\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.062925 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-config-data\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.062994 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-public-tls-certs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063029 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rffxg\" (UniqueName: \"kubernetes.io/projected/81b1a31c-5cb8-4db8-9405-36870fe48431-kube-api-access-rffxg\") pod 
\"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063080 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htthb\" (UniqueName: \"kubernetes.io/projected/01ef5a0e-b9ae-45ea-806c-8ac446033d60-kube-api-access-htthb\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063108 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063157 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063233 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-config-data\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063271 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01ef5a0e-b9ae-45ea-806c-8ac446033d60-logs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063296 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063325 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b1a31c-5cb8-4db8-9405-36870fe48431-logs\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.063418 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.066802 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01ef5a0e-b9ae-45ea-806c-8ac446033d60-logs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.068170 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.068489 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b1a31c-5cb8-4db8-9405-36870fe48431-logs\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.070937 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.070984 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.074016 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-config-data\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.074121 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-internal-tls-certs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.078276 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b1a31c-5cb8-4db8-9405-36870fe48431-config-data\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.086987 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htthb\" (UniqueName: \"kubernetes.io/projected/01ef5a0e-b9ae-45ea-806c-8ac446033d60-kube-api-access-htthb\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.087202 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01ef5a0e-b9ae-45ea-806c-8ac446033d60-public-tls-certs\") pod \"nova-api-0\" (UID: \"01ef5a0e-b9ae-45ea-806c-8ac446033d60\") " pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.089797 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rffxg\" (UniqueName: \"kubernetes.io/projected/81b1a31c-5cb8-4db8-9405-36870fe48431-kube-api-access-rffxg\") pod \"nova-metadata-0\" (UID: \"81b1a31c-5cb8-4db8-9405-36870fe48431\") " pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.256415 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.257648 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.389488 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.456231 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3464ba-e769-4e18-a7ff-4c752456a9ee" path="/var/lib/kubelet/pods/ab3464ba-e769-4e18-a7ff-4c752456a9ee/volumes" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.457281 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fd66b0-cfd3-423c-9ba7-8a6a017c239e" path="/var/lib/kubelet/pods/e8fd66b0-cfd3-423c-9ba7-8a6a017c239e/volumes" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.458943 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecee36c3-73e5-4e3b-8eb8-c29eae84dab5" path="/var/lib/kubelet/pods/ecee36c3-73e5-4e3b-8eb8-c29eae84dab5/volumes" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.474449 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-config-data\") pod \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.474557 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-combined-ca-bundle\") pod \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.474688 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnszz\" (UniqueName: \"kubernetes.io/projected/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-kube-api-access-vnszz\") pod \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\" (UID: \"ad5e0d41-5311-4d00-b9e8-69915bf46fd9\") " Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.488747 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-kube-api-access-vnszz" (OuterVolumeSpecName: "kube-api-access-vnszz") pod "ad5e0d41-5311-4d00-b9e8-69915bf46fd9" (UID: "ad5e0d41-5311-4d00-b9e8-69915bf46fd9"). InnerVolumeSpecName "kube-api-access-vnszz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.499576 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnszz\" (UniqueName: \"kubernetes.io/projected/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-kube-api-access-vnszz\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.558412 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-config-data" (OuterVolumeSpecName: "config-data") pod "ad5e0d41-5311-4d00-b9e8-69915bf46fd9" (UID: "ad5e0d41-5311-4d00-b9e8-69915bf46fd9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.587699 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad5e0d41-5311-4d00-b9e8-69915bf46fd9" (UID: "ad5e0d41-5311-4d00-b9e8-69915bf46fd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.602557 4903 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.602591 4903 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad5e0d41-5311-4d00-b9e8-69915bf46fd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.683594 4903 generic.go:334] "Generic (PLEG): container finished" podID="ad5e0d41-5311-4d00-b9e8-69915bf46fd9" containerID="793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" exitCode=0 Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.683740 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.685895 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ad5e0d41-5311-4d00-b9e8-69915bf46fd9","Type":"ContainerDied","Data":"793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535"} Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.686124 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ad5e0d41-5311-4d00-b9e8-69915bf46fd9","Type":"ContainerDied","Data":"4aa7b922254f89ee4d5261b6806b3b4153b9b04c3f34d730a51a2e6703fe50a6"} Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.686141 4903 scope.go:117] "RemoveContainer" containerID="793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.721017 4903 generic.go:334] "Generic (PLEG): container finished" podID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerID="5e825b89e882b825f6c964ef706ce565a2347ee3189713c6570bbb87b62a9a41" exitCode=0 Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.721096 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd8tk" event={"ID":"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6","Type":"ContainerDied","Data":"5e825b89e882b825f6c964ef706ce565a2347ee3189713c6570bbb87b62a9a41"} Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.721121 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd8tk" event={"ID":"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6","Type":"ContainerStarted","Data":"9a86e55564c061a89934aad8a996f8fb71a3fcccab31c477832efbb22df6f755"} Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.732254 4903 scope.go:117] "RemoveContainer" containerID="793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.736514 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:12:54 crc kubenswrapper[4903]: E0128 
18:12:54.740398 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535\": container with ID starting with 793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535 not found: ID does not exist" containerID="793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.740446 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535"} err="failed to get container status \"793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535\": rpc error: code = NotFound desc = could not find container \"793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535\": container with ID starting with 793c5dc46d356a10ce2b8d110e45a64ea96f8aaa794f9321a5b1c922156b8535 not found: ID does not exist" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.761839 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed58bc3b-2319-44c1-8864-666e333d559b","Type":"ContainerStarted","Data":"3856b3c61829b1b7f3315f0cb97bb8531f0b14e0fc0253845cf51dc9b4f7e9c7"} Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.761929 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ed58bc3b-2319-44c1-8864-666e333d559b","Type":"ContainerStarted","Data":"92e454ba0d6a2aa4356b8dfa0605e40540589cbcafc32071a4b86dc1006e5e8b"} Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.807068 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.817951 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.844790 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:12:54 crc kubenswrapper[4903]: E0128 18:12:54.845363 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5e0d41-5311-4d00-b9e8-69915bf46fd9" containerName="nova-cell0-conductor-conductor" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.845385 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5e0d41-5311-4d00-b9e8-69915bf46fd9" containerName="nova-cell0-conductor-conductor" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.845688 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5e0d41-5311-4d00-b9e8-69915bf46fd9" containerName="nova-cell0-conductor-conductor" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.846634 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.852283 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.852258928 podStartE2EDuration="2.852258928s" podCreationTimestamp="2026-01-28 18:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:12:54.791100735 +0000 UTC m=+8847.067072246" watchObservedRunningTime="2026-01-28 18:12:54.852258928 +0000 UTC m=+8847.128230449" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.853439 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.889953 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:12:54 crc kubenswrapper[4903]: I0128 18:12:54.905665 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:12:54 crc kubenswrapper[4903]: W0128 18:12:54.916786 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b1a31c_5cb8_4db8_9405_36870fe48431.slice/crio-0184cc19fd282ea031d36fd69f0910df65ae9c35ec06d5869533bab816ba8647 WatchSource:0}: Error finding container 0184cc19fd282ea031d36fd69f0910df65ae9c35ec06d5869533bab816ba8647: Status 404 returned error can't find the container with id 0184cc19fd282ea031d36fd69f0910df65ae9c35ec06d5869533bab816ba8647 Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.015643 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd811f86-63c5-4150-9105-30a06941f74b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.016563 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd811f86-63c5-4150-9105-30a06941f74b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.016786 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szpnb\" (UniqueName: \"kubernetes.io/projected/cd811f86-63c5-4150-9105-30a06941f74b-kube-api-access-szpnb\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.119139 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd811f86-63c5-4150-9105-30a06941f74b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.120231 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd811f86-63c5-4150-9105-30a06941f74b-config-data\") pod \"nova-cell0-conductor-0\" (UID: 
\"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.120387 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szpnb\" (UniqueName: \"kubernetes.io/projected/cd811f86-63c5-4150-9105-30a06941f74b-kube-api-access-szpnb\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.129791 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd811f86-63c5-4150-9105-30a06941f74b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.129826 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd811f86-63c5-4150-9105-30a06941f74b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.136978 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.139417 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szpnb\" (UniqueName: \"kubernetes.io/projected/cd811f86-63c5-4150-9105-30a06941f74b-kube-api-access-szpnb\") pod \"nova-cell0-conductor-0\" (UID: \"cd811f86-63c5-4150-9105-30a06941f74b\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.393482 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.766405 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81b1a31c-5cb8-4db8-9405-36870fe48431","Type":"ContainerStarted","Data":"2a7382bc4561e3439346d60eaf7e7adb0affea9e00c226477baae43a77becd9e"} Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.766982 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81b1a31c-5cb8-4db8-9405-36870fe48431","Type":"ContainerStarted","Data":"2e8c4a106edbd2e4c45aca9fd325c5e6879c99a67953d892da4273deb42cacb5"} Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.766996 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81b1a31c-5cb8-4db8-9405-36870fe48431","Type":"ContainerStarted","Data":"0184cc19fd282ea031d36fd69f0910df65ae9c35ec06d5869533bab816ba8647"} Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.770094 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01ef5a0e-b9ae-45ea-806c-8ac446033d60","Type":"ContainerStarted","Data":"78b7fa62444362f6fd30201b386476060b2055a5680af40bea3580d219549ad7"} Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.770127 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01ef5a0e-b9ae-45ea-806c-8ac446033d60","Type":"ContainerStarted","Data":"5cb15c31d24bc713a336b19f542642b6c14d8d1a20fc3796af3328112105d2c3"} Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.770140 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01ef5a0e-b9ae-45ea-806c-8ac446033d60","Type":"ContainerStarted","Data":"4eaee2ed40d5707bd5e80251cc07aa4c1d30430e0c0886a0bb3cbe38a2067002"} Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.773747 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd8tk" event={"ID":"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6","Type":"ContainerStarted","Data":"8afef252ce7f80f0a21401a83ba2970f9b47b64408e91d150c2b9dccc26e321a"} Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.814292 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.814265761 podStartE2EDuration="2.814265761s" podCreationTimestamp="2026-01-28 18:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:12:55.795305999 +0000 UTC m=+8848.071277510" watchObservedRunningTime="2026-01-28 18:12:55.814265761 +0000 UTC m=+8848.090237272" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.840127 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.840107541 podStartE2EDuration="2.840107541s" podCreationTimestamp="2026-01-28 18:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:12:55.835022803 +0000 UTC m=+8848.110994314" watchObservedRunningTime="2026-01-28 18:12:55.840107541 +0000 UTC m=+8848.116079052" Jan 28 18:12:55 crc kubenswrapper[4903]: I0128 18:12:55.930379 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:12:56 crc kubenswrapper[4903]: I0128 18:12:56.427313 4903 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ad5e0d41-5311-4d00-b9e8-69915bf46fd9" path="/var/lib/kubelet/pods/ad5e0d41-5311-4d00-b9e8-69915bf46fd9/volumes" Jan 28 18:12:56 crc kubenswrapper[4903]: I0128 18:12:56.783609 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd811f86-63c5-4150-9105-30a06941f74b","Type":"ContainerStarted","Data":"3a6b96324e1f4f3eddbbf2f0a885667db9c1c11410e953b6fa5d720319ecaa12"} Jan 28 18:12:56 crc kubenswrapper[4903]: I0128 18:12:56.783661 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cd811f86-63c5-4150-9105-30a06941f74b","Type":"ContainerStarted","Data":"8670601edd78e1ce450131be586da002d022b884343398b6bb084df570869fd4"} Jan 28 18:12:56 crc kubenswrapper[4903]: I0128 18:12:56.785076 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 18:12:56 crc kubenswrapper[4903]: I0128 18:12:56.820340 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.820322146 podStartE2EDuration="2.820322146s" podCreationTimestamp="2026-01-28 18:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:12:56.808374843 +0000 UTC m=+8849.084346374" watchObservedRunningTime="2026-01-28 18:12:56.820322146 +0000 UTC m=+8849.096293657" Jan 28 18:12:57 crc kubenswrapper[4903]: I0128 18:12:57.794798 4903 generic.go:334] "Generic (PLEG): container finished" podID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerID="8afef252ce7f80f0a21401a83ba2970f9b47b64408e91d150c2b9dccc26e321a" exitCode=0 Jan 28 18:12:57 crc kubenswrapper[4903]: I0128 18:12:57.795018 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd8tk" event={"ID":"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6","Type":"ContainerDied","Data":"8afef252ce7f80f0a21401a83ba2970f9b47b64408e91d150c2b9dccc26e321a"} Jan 28 18:12:58 crc kubenswrapper[4903]: I0128 18:12:58.125934 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:12:58 crc kubenswrapper[4903]: I0128 18:12:58.421938 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:12:59 crc kubenswrapper[4903]: I0128 18:12:59.258633 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:12:59 crc kubenswrapper[4903]: I0128 18:12:59.259250 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:12:59 crc kubenswrapper[4903]: I0128 18:12:59.816562 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"c12355c7693c6c68fad13f5aa2dc926ecd0ab1089859e8129604ea5d9dca69cd"} Jan 28 18:12:59 crc kubenswrapper[4903]: I0128 18:12:59.820330 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd8tk" event={"ID":"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6","Type":"ContainerStarted","Data":"045574395f3b3ee09da5e52f58dcd536240b431b7eb20f59f133492431150830"} Jan 28 18:12:59 crc kubenswrapper[4903]: I0128 18:12:59.863138 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-pd8tk" podStartSLOduration=4.138467917 podStartE2EDuration="7.863116435s" podCreationTimestamp="2026-01-28 18:12:52 +0000 UTC" firstStartedPulling="2026-01-28 18:12:54.736190239 +0000 UTC m=+8847.012161750" lastFinishedPulling="2026-01-28 18:12:58.460838757 +0000 UTC m=+8850.736810268" observedRunningTime="2026-01-28 18:12:59.859976331 +0000 UTC m=+8852.135947842" watchObservedRunningTime="2026-01-28 18:12:59.863116435 +0000 UTC m=+8852.139087936" Jan 28 18:13:02 crc kubenswrapper[4903]: I0128 18:13:02.016684 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 18:13:03 crc kubenswrapper[4903]: I0128 18:13:03.126111 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 18:13:03 crc kubenswrapper[4903]: I0128 18:13:03.171969 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:13:03 crc kubenswrapper[4903]: I0128 18:13:03.172043 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:13:03 crc kubenswrapper[4903]: I0128 18:13:03.176981 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 18:13:03 crc kubenswrapper[4903]: I0128 18:13:03.887942 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 18:13:04 crc kubenswrapper[4903]: I0128 18:13:04.232566 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pd8tk" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="registry-server" probeResult="failure" output=< Jan 28 18:13:04 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 18:13:04 crc kubenswrapper[4903]: > Jan 28 18:13:04 crc kubenswrapper[4903]: I0128 18:13:04.259270 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:13:04 crc kubenswrapper[4903]: I0128 18:13:04.259427 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:13:04 crc kubenswrapper[4903]: I0128 18:13:04.259495 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:13:04 crc kubenswrapper[4903]: I0128 18:13:04.259567 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:13:05 crc kubenswrapper[4903]: I0128 18:13:05.277716 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="01ef5a0e-b9ae-45ea-806c-8ac446033d60" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.190:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:13:05 crc kubenswrapper[4903]: I0128 18:13:05.277738 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="01ef5a0e-b9ae-45ea-806c-8ac446033d60" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:13:05 crc kubenswrapper[4903]: I0128 18:13:05.283713 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="81b1a31c-5cb8-4db8-9405-36870fe48431" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:13:05 crc kubenswrapper[4903]: I0128 18:13:05.283713 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="81b1a31c-5cb8-4db8-9405-36870fe48431" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:13:06 crc kubenswrapper[4903]: I0128 18:13:06.207010 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 18:13:13 crc kubenswrapper[4903]: I0128 18:13:13.221998 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:13:13 crc kubenswrapper[4903]: I0128 18:13:13.277976 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:13:13 crc kubenswrapper[4903]: I0128 18:13:13.461911 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pd8tk"] Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.264804 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.265808 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.265960 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.266564 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.272003 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.272072 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.272385 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.274861 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.967282 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.967663 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pd8tk" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="registry-server" containerID="cri-o://045574395f3b3ee09da5e52f58dcd536240b431b7eb20f59f133492431150830" gracePeriod=2 Jan 28 18:13:14 crc kubenswrapper[4903]: I0128 18:13:14.974218 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:13:15 crc kubenswrapper[4903]: I0128 18:13:15.984645 4903 generic.go:334] "Generic (PLEG): container finished" podID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" 
containerID="045574395f3b3ee09da5e52f58dcd536240b431b7eb20f59f133492431150830" exitCode=0 Jan 28 18:13:15 crc kubenswrapper[4903]: I0128 18:13:15.984660 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd8tk" event={"ID":"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6","Type":"ContainerDied","Data":"045574395f3b3ee09da5e52f58dcd536240b431b7eb20f59f133492431150830"} Jan 28 18:13:15 crc kubenswrapper[4903]: I0128 18:13:15.985025 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pd8tk" event={"ID":"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6","Type":"ContainerDied","Data":"9a86e55564c061a89934aad8a996f8fb71a3fcccab31c477832efbb22df6f755"} Jan 28 18:13:15 crc kubenswrapper[4903]: I0128 18:13:15.985040 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a86e55564c061a89934aad8a996f8fb71a3fcccab31c477832efbb22df6f755" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.087390 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.179000 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr"] Jan 28 18:13:16 crc kubenswrapper[4903]: E0128 18:13:16.179817 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="extract-utilities" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.179846 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="extract-utilities" Jan 28 18:13:16 crc kubenswrapper[4903]: E0128 18:13:16.179884 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="extract-content" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.179894 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="extract-content" Jan 28 18:13:16 crc kubenswrapper[4903]: E0128 18:13:16.179923 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="registry-server" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.179933 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="registry-server" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.180208 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" containerName="registry-server" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.181062 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.188424 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.188791 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-v4rn6" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.188972 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.189588 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.190285 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.190586 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.190804 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.194171 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-utilities\") pod \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.194253 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjqw8\" (UniqueName: \"kubernetes.io/projected/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-kube-api-access-kjqw8\") pod \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.194345 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-catalog-content\") pod \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\" (UID: \"d3d238eb-cb2d-44e5-9f52-32c7d4de80b6\") " Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.194989 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-utilities" (OuterVolumeSpecName: "utilities") pod "d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" (UID: "d3d238eb-cb2d-44e5-9f52-32c7d4de80b6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.195386 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.199650 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr"] Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.217803 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-kube-api-access-kjqw8" (OuterVolumeSpecName: "kube-api-access-kjqw8") pod "d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" (UID: "d3d238eb-cb2d-44e5-9f52-32c7d4de80b6"). InnerVolumeSpecName "kube-api-access-kjqw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298020 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298144 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5z79\" (UniqueName: \"kubernetes.io/projected/fcc65f96-e957-4640-bf28-30b206b3bfc0-kube-api-access-b5z79\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298187 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298267 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298306 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298333 4903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298429 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298465 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298501 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.298722 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjqw8\" (UniqueName: \"kubernetes.io/projected/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-kube-api-access-kjqw8\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.336559 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" (UID: "d3d238eb-cb2d-44e5-9f52-32c7d4de80b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.403912 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404017 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5z79\" (UniqueName: \"kubernetes.io/projected/fcc65f96-e957-4640-bf28-30b206b3bfc0-kube-api-access-b5z79\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404071 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404169 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404228 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404261 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404329 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404361 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" 
(UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404412 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.404767 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.414012 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.416507 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.419469 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.420451 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.420632 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.425269 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.428691 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.430585 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.449033 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5z79\" (UniqueName: \"kubernetes.io/projected/fcc65f96-e957-4640-bf28-30b206b3bfc0-kube-api-access-b5z79\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:16 crc kubenswrapper[4903]: I0128 18:13:16.509174 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:13:17 crc kubenswrapper[4903]: I0128 18:13:16.999436 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pd8tk" Jan 28 18:13:17 crc kubenswrapper[4903]: I0128 18:13:17.041574 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pd8tk"] Jan 28 18:13:17 crc kubenswrapper[4903]: I0128 18:13:17.057043 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pd8tk"] Jan 28 18:13:17 crc kubenswrapper[4903]: I0128 18:13:17.086508 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr"] Jan 28 18:13:17 crc kubenswrapper[4903]: W0128 18:13:17.489636 4903 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcc65f96_e957_4640_bf28_30b206b3bfc0.slice/crio-bbca55b449275db336ba20469097e0164a769160039b2e066a50054cd7beb306 WatchSource:0}: Error finding container bbca55b449275db336ba20469097e0164a769160039b2e066a50054cd7beb306: Status 404 returned error can't find the container with id bbca55b449275db336ba20469097e0164a769160039b2e066a50054cd7beb306 Jan 28 18:13:18 crc kubenswrapper[4903]: I0128 18:13:18.009066 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" event={"ID":"fcc65f96-e957-4640-bf28-30b206b3bfc0","Type":"ContainerStarted","Data":"bbca55b449275db336ba20469097e0164a769160039b2e066a50054cd7beb306"} Jan 28 18:13:18 crc kubenswrapper[4903]: I0128 18:13:18.426455 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d238eb-cb2d-44e5-9f52-32c7d4de80b6" path="/var/lib/kubelet/pods/d3d238eb-cb2d-44e5-9f52-32c7d4de80b6/volumes" Jan 28 18:13:24 crc kubenswrapper[4903]: I0128 18:13:24.081736 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" event={"ID":"fcc65f96-e957-4640-bf28-30b206b3bfc0","Type":"ContainerStarted","Data":"0066ea3c6f23a6c94f04d393404085634a3d15ce11fd0b56a867033cf0e715a4"} Jan 28 18:13:24 crc kubenswrapper[4903]: I0128 18:13:24.118965 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" podStartSLOduration=2.78471933 podStartE2EDuration="8.11893342s" podCreationTimestamp="2026-01-28 18:13:16 +0000 UTC" firstStartedPulling="2026-01-28 18:13:17.499414584 +0000 UTC m=+8869.775386095" lastFinishedPulling="2026-01-28 18:13:22.833628674 +0000 UTC m=+8875.109600185" observedRunningTime="2026-01-28 18:13:24.104586622 +0000 UTC m=+8876.380558133" watchObservedRunningTime="2026-01-28 18:13:24.11893342 +0000 UTC m=+8876.394904931" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.155438 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm"] Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.157874 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.161958 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.166702 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm"] Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.176016 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.292063 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5w2t\" (UniqueName: \"kubernetes.io/projected/fef79223-a28e-4e42-a6cc-4999f2aa2899-kube-api-access-d5w2t\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.292145 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef79223-a28e-4e42-a6cc-4999f2aa2899-secret-volume\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.292225 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef79223-a28e-4e42-a6cc-4999f2aa2899-config-volume\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.394194 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5w2t\" (UniqueName: \"kubernetes.io/projected/fef79223-a28e-4e42-a6cc-4999f2aa2899-kube-api-access-d5w2t\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.394562 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef79223-a28e-4e42-a6cc-4999f2aa2899-secret-volume\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.394793 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef79223-a28e-4e42-a6cc-4999f2aa2899-config-volume\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.395962 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef79223-a28e-4e42-a6cc-4999f2aa2899-config-volume\") pod 
\"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.405451 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef79223-a28e-4e42-a6cc-4999f2aa2899-secret-volume\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.435052 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5w2t\" (UniqueName: \"kubernetes.io/projected/fef79223-a28e-4e42-a6cc-4999f2aa2899-kube-api-access-d5w2t\") pod \"collect-profiles-29493735-gklsm\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.480818 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:00 crc kubenswrapper[4903]: I0128 18:15:00.998717 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm"] Jan 28 18:15:01 crc kubenswrapper[4903]: I0128 18:15:01.241344 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" event={"ID":"fef79223-a28e-4e42-a6cc-4999f2aa2899","Type":"ContainerStarted","Data":"32638857fa56da6b8a2998b549f51352b49ef0c30d26ac0cdfeec2577410f94c"} Jan 28 18:15:01 crc kubenswrapper[4903]: I0128 18:15:01.242468 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" event={"ID":"fef79223-a28e-4e42-a6cc-4999f2aa2899","Type":"ContainerStarted","Data":"725c4fd783f754f0f8d3a13470939883a0aed1693bf8d4f83d3af9d4f4625538"} Jan 28 18:15:01 crc kubenswrapper[4903]: I0128 18:15:01.269702 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" podStartSLOduration=1.269681233 podStartE2EDuration="1.269681233s" podCreationTimestamp="2026-01-28 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:01.259692413 +0000 UTC m=+8973.535663944" watchObservedRunningTime="2026-01-28 18:15:01.269681233 +0000 UTC m=+8973.545652744" Jan 28 18:15:02 crc kubenswrapper[4903]: I0128 18:15:02.262483 4903 generic.go:334] "Generic (PLEG): container finished" podID="fef79223-a28e-4e42-a6cc-4999f2aa2899" containerID="32638857fa56da6b8a2998b549f51352b49ef0c30d26ac0cdfeec2577410f94c" exitCode=0 Jan 28 18:15:02 crc kubenswrapper[4903]: I0128 18:15:02.262561 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" event={"ID":"fef79223-a28e-4e42-a6cc-4999f2aa2899","Type":"ContainerDied","Data":"32638857fa56da6b8a2998b549f51352b49ef0c30d26ac0cdfeec2577410f94c"} Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.701061 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.873299 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5w2t\" (UniqueName: \"kubernetes.io/projected/fef79223-a28e-4e42-a6cc-4999f2aa2899-kube-api-access-d5w2t\") pod \"fef79223-a28e-4e42-a6cc-4999f2aa2899\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.873441 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef79223-a28e-4e42-a6cc-4999f2aa2899-secret-volume\") pod \"fef79223-a28e-4e42-a6cc-4999f2aa2899\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.873472 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef79223-a28e-4e42-a6cc-4999f2aa2899-config-volume\") pod \"fef79223-a28e-4e42-a6cc-4999f2aa2899\" (UID: \"fef79223-a28e-4e42-a6cc-4999f2aa2899\") " Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.874399 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef79223-a28e-4e42-a6cc-4999f2aa2899-config-volume" (OuterVolumeSpecName: "config-volume") pod "fef79223-a28e-4e42-a6cc-4999f2aa2899" (UID: "fef79223-a28e-4e42-a6cc-4999f2aa2899"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.880489 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef79223-a28e-4e42-a6cc-4999f2aa2899-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fef79223-a28e-4e42-a6cc-4999f2aa2899" (UID: "fef79223-a28e-4e42-a6cc-4999f2aa2899"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.880866 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef79223-a28e-4e42-a6cc-4999f2aa2899-kube-api-access-d5w2t" (OuterVolumeSpecName: "kube-api-access-d5w2t") pod "fef79223-a28e-4e42-a6cc-4999f2aa2899" (UID: "fef79223-a28e-4e42-a6cc-4999f2aa2899"). InnerVolumeSpecName "kube-api-access-d5w2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.975515 4903 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fef79223-a28e-4e42-a6cc-4999f2aa2899-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.975651 4903 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fef79223-a28e-4e42-a6cc-4999f2aa2899-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:03 crc kubenswrapper[4903]: I0128 18:15:03.975662 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5w2t\" (UniqueName: \"kubernetes.io/projected/fef79223-a28e-4e42-a6cc-4999f2aa2899-kube-api-access-d5w2t\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:04 crc kubenswrapper[4903]: I0128 18:15:04.285364 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" event={"ID":"fef79223-a28e-4e42-a6cc-4999f2aa2899","Type":"ContainerDied","Data":"725c4fd783f754f0f8d3a13470939883a0aed1693bf8d4f83d3af9d4f4625538"} Jan 28 18:15:04 crc kubenswrapper[4903]: I0128 18:15:04.285522 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725c4fd783f754f0f8d3a13470939883a0aed1693bf8d4f83d3af9d4f4625538" Jan 28 18:15:04 crc kubenswrapper[4903]: I0128 18:15:04.285665 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-gklsm" Jan 28 18:15:04 crc kubenswrapper[4903]: I0128 18:15:04.344469 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4"] Jan 28 18:15:04 crc kubenswrapper[4903]: I0128 18:15:04.353726 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493690-7w2q4"] Jan 28 18:15:04 crc kubenswrapper[4903]: I0128 18:15:04.427040 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="287bf0f6-bb05-41a4-88c3-4389e0b19e74" path="/var/lib/kubelet/pods/287bf0f6-bb05-41a4-88c3-4389e0b19e74/volumes" Jan 28 18:15:26 crc kubenswrapper[4903]: I0128 18:15:26.613857 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:15:26 crc kubenswrapper[4903]: I0128 18:15:26.614697 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:15:56 crc kubenswrapper[4903]: I0128 18:15:56.613157 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:15:56 crc kubenswrapper[4903]: I0128 18:15:56.614629 4903 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:16:01 crc kubenswrapper[4903]: I0128 18:16:01.434751 4903 scope.go:117] "RemoveContainer" containerID="3b45e4578babc6f44f127323f66547998ae6abe955b093b94411694d4d0bac07" Jan 28 18:16:10 crc kubenswrapper[4903]: I0128 18:16:10.956976 4903 generic.go:334] "Generic (PLEG): container finished" podID="fcc65f96-e957-4640-bf28-30b206b3bfc0" containerID="0066ea3c6f23a6c94f04d393404085634a3d15ce11fd0b56a867033cf0e715a4" exitCode=0 Jan 28 18:16:10 crc kubenswrapper[4903]: I0128 18:16:10.957078 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" event={"ID":"fcc65f96-e957-4640-bf28-30b206b3bfc0","Type":"ContainerDied","Data":"0066ea3c6f23a6c94f04d393404085634a3d15ce11fd0b56a867033cf0e715a4"} Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.440403 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.530263 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-ssh-key-openstack-cell1\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.530324 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cells-global-config-0\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.530420 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-combined-ca-bundle\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.530449 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-0\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.530545 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-1\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.530621 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-1\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc 
kubenswrapper[4903]: I0128 18:16:12.530643 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-inventory\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.530663 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5z79\" (UniqueName: \"kubernetes.io/projected/fcc65f96-e957-4640-bf28-30b206b3bfc0-kube-api-access-b5z79\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.531106 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-0\") pod \"fcc65f96-e957-4640-bf28-30b206b3bfc0\" (UID: \"fcc65f96-e957-4640-bf28-30b206b3bfc0\") " Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.551108 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.575795 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc65f96-e957-4640-bf28-30b206b3bfc0-kube-api-access-b5z79" (OuterVolumeSpecName: "kube-api-access-b5z79") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "kube-api-access-b5z79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.595989 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.597333 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.614030 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.631862 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.631877 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.639639 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5z79\" (UniqueName: \"kubernetes.io/projected/fcc65f96-e957-4640-bf28-30b206b3bfc0-kube-api-access-b5z79\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.639684 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.639701 4903 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.639713 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.639725 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.639740 4903 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.639752 4903 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.643889 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.646727 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-inventory" (OuterVolumeSpecName: "inventory") pod "fcc65f96-e957-4640-bf28-30b206b3bfc0" (UID: "fcc65f96-e957-4640-bf28-30b206b3bfc0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.743575 4903 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.743809 4903 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcc65f96-e957-4640-bf28-30b206b3bfc0-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.978693 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" event={"ID":"fcc65f96-e957-4640-bf28-30b206b3bfc0","Type":"ContainerDied","Data":"bbca55b449275db336ba20469097e0164a769160039b2e066a50054cd7beb306"} Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.978733 4903 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbca55b449275db336ba20469097e0164a769160039b2e066a50054cd7beb306" Jan 28 18:16:12 crc kubenswrapper[4903]: I0128 18:16:12.978796 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxbrdr" Jan 28 18:16:26 crc kubenswrapper[4903]: I0128 18:16:26.613899 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:16:26 crc kubenswrapper[4903]: I0128 18:16:26.614380 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:16:26 crc kubenswrapper[4903]: I0128 18:16:26.614420 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 18:16:26 crc kubenswrapper[4903]: I0128 18:16:26.615174 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c12355c7693c6c68fad13f5aa2dc926ecd0ab1089859e8129604ea5d9dca69cd"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:16:26 crc kubenswrapper[4903]: I0128 18:16:26.615215 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" 
containerID="cri-o://c12355c7693c6c68fad13f5aa2dc926ecd0ab1089859e8129604ea5d9dca69cd" gracePeriod=600 Jan 28 18:16:27 crc kubenswrapper[4903]: I0128 18:16:27.121684 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="c12355c7693c6c68fad13f5aa2dc926ecd0ab1089859e8129604ea5d9dca69cd" exitCode=0 Jan 28 18:16:27 crc kubenswrapper[4903]: I0128 18:16:27.122417 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"c12355c7693c6c68fad13f5aa2dc926ecd0ab1089859e8129604ea5d9dca69cd"} Jan 28 18:16:27 crc kubenswrapper[4903]: I0128 18:16:27.122482 4903 scope.go:117] "RemoveContainer" containerID="073015e1647b860e7eab6018bac5f45329772d8d05e9175cd4e0d171e5c26836" Jan 28 18:16:28 crc kubenswrapper[4903]: I0128 18:16:28.139374 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerStarted","Data":"405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b"} Jan 28 18:18:56 crc kubenswrapper[4903]: I0128 18:18:56.613636 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:18:56 crc kubenswrapper[4903]: I0128 18:18:56.614609 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:19:01 crc kubenswrapper[4903]: I0128 18:19:01.544641 4903 scope.go:117] "RemoveContainer" containerID="045574395f3b3ee09da5e52f58dcd536240b431b7eb20f59f133492431150830" Jan 28 18:19:01 crc kubenswrapper[4903]: I0128 18:19:01.571485 4903 scope.go:117] "RemoveContainer" containerID="8afef252ce7f80f0a21401a83ba2970f9b47b64408e91d150c2b9dccc26e321a" Jan 28 18:19:01 crc kubenswrapper[4903]: I0128 18:19:01.593984 4903 scope.go:117] "RemoveContainer" containerID="5e825b89e882b825f6c964ef706ce565a2347ee3189713c6570bbb87b62a9a41" Jan 28 18:19:26 crc kubenswrapper[4903]: I0128 18:19:26.048845 4903 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-769z8" podUID="a59793c9-95fe-448d-999b-48f9e9f868c4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:19:26 crc kubenswrapper[4903]: I0128 18:19:26.048859 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-769z8" podUID="a59793c9-95fe-448d-999b-48f9e9f868c4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:19:26 crc kubenswrapper[4903]: I0128 18:19:26.613686 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 28 18:19:26 crc kubenswrapper[4903]: I0128 18:19:26.613762 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:19:56 crc kubenswrapper[4903]: I0128 18:19:56.614957 4903 patch_prober.go:28] interesting pod/machine-config-daemon-plxzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:19:56 crc kubenswrapper[4903]: I0128 18:19:56.616068 4903 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:19:56 crc kubenswrapper[4903]: I0128 18:19:56.616203 4903 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" Jan 28 18:19:56 crc kubenswrapper[4903]: I0128 18:19:56.618393 4903 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b"} pod="openshift-machine-config-operator/machine-config-daemon-plxzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:19:56 crc kubenswrapper[4903]: I0128 18:19:56.618723 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" containerName="machine-config-daemon" containerID="cri-o://405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" gracePeriod=600 Jan 28 18:19:56 crc kubenswrapper[4903]: E0128 18:19:56.749690 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:19:57 crc kubenswrapper[4903]: I0128 18:19:57.231630 4903 generic.go:334] "Generic (PLEG): container finished" podID="dacf7a8c-d645-4596-9266-092101fc3613" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" exitCode=0 Jan 28 18:19:57 crc kubenswrapper[4903]: I0128 18:19:57.231675 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" event={"ID":"dacf7a8c-d645-4596-9266-092101fc3613","Type":"ContainerDied","Data":"405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b"} Jan 28 18:19:57 crc kubenswrapper[4903]: I0128 18:19:57.231718 4903 scope.go:117] "RemoveContainer" containerID="c12355c7693c6c68fad13f5aa2dc926ecd0ab1089859e8129604ea5d9dca69cd" Jan 28 18:19:57 crc kubenswrapper[4903]: I0128 18:19:57.232834 4903 
scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:19:57 crc kubenswrapper[4903]: E0128 18:19:57.233219 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:06.991449 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kmmfm"] Jan 28 18:20:07 crc kubenswrapper[4903]: E0128 18:20:07.015676 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc65f96-e957-4640-bf28-30b206b3bfc0" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.015726 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc65f96-e957-4640-bf28-30b206b3bfc0" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 28 18:20:07 crc kubenswrapper[4903]: E0128 18:20:07.015769 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef79223-a28e-4e42-a6cc-4999f2aa2899" containerName="collect-profiles" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.015779 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef79223-a28e-4e42-a6cc-4999f2aa2899" containerName="collect-profiles" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.016177 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef79223-a28e-4e42-a6cc-4999f2aa2899" containerName="collect-profiles" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.016235 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc65f96-e957-4640-bf28-30b206b3bfc0" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.018889 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmmfm"] Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.019675 4903 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.020943 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-catalog-content\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.021125 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm4c8\" (UniqueName: \"kubernetes.io/projected/9cd4b83e-1568-4284-b26c-790224b2be46-kube-api-access-mm4c8\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.021178 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-utilities\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.123354 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm4c8\" (UniqueName: \"kubernetes.io/projected/9cd4b83e-1568-4284-b26c-790224b2be46-kube-api-access-mm4c8\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.123402 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-utilities\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.123580 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-catalog-content\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.124078 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-utilities\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.124327 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-catalog-content\") pod \"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.146760 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm4c8\" (UniqueName: \"kubernetes.io/projected/9cd4b83e-1568-4284-b26c-790224b2be46-kube-api-access-mm4c8\") pod 
\"redhat-marketplace-kmmfm\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.356063 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:07 crc kubenswrapper[4903]: I0128 18:20:07.936025 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmmfm"] Jan 28 18:20:08 crc kubenswrapper[4903]: I0128 18:20:08.356969 4903 generic.go:334] "Generic (PLEG): container finished" podID="9cd4b83e-1568-4284-b26c-790224b2be46" containerID="7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f" exitCode=0 Jan 28 18:20:08 crc kubenswrapper[4903]: I0128 18:20:08.357018 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmmfm" event={"ID":"9cd4b83e-1568-4284-b26c-790224b2be46","Type":"ContainerDied","Data":"7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f"} Jan 28 18:20:08 crc kubenswrapper[4903]: I0128 18:20:08.357045 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmmfm" event={"ID":"9cd4b83e-1568-4284-b26c-790224b2be46","Type":"ContainerStarted","Data":"2b35849b30ee79585e157e8bba23ce1b5788e83297560aab51a966a37ab3a45c"} Jan 28 18:20:08 crc kubenswrapper[4903]: I0128 18:20:08.365240 4903 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:20:09 crc kubenswrapper[4903]: I0128 18:20:09.413405 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:20:09 crc kubenswrapper[4903]: E0128 18:20:09.414051 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:20:11 crc kubenswrapper[4903]: I0128 18:20:11.384157 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmmfm" event={"ID":"9cd4b83e-1568-4284-b26c-790224b2be46","Type":"ContainerStarted","Data":"cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94"} Jan 28 18:20:12 crc kubenswrapper[4903]: I0128 18:20:12.393866 4903 generic.go:334] "Generic (PLEG): container finished" podID="9cd4b83e-1568-4284-b26c-790224b2be46" containerID="cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94" exitCode=0 Jan 28 18:20:12 crc kubenswrapper[4903]: I0128 18:20:12.393919 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmmfm" event={"ID":"9cd4b83e-1568-4284-b26c-790224b2be46","Type":"ContainerDied","Data":"cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94"} Jan 28 18:20:14 crc kubenswrapper[4903]: I0128 18:20:14.426653 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmmfm" event={"ID":"9cd4b83e-1568-4284-b26c-790224b2be46","Type":"ContainerStarted","Data":"b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5"} Jan 28 18:20:14 crc kubenswrapper[4903]: I0128 18:20:14.453459 4903 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kmmfm" podStartSLOduration=3.148362954 podStartE2EDuration="8.453433797s" podCreationTimestamp="2026-01-28 18:20:06 +0000 UTC" firstStartedPulling="2026-01-28 18:20:08.364922429 +0000 UTC m=+9280.640893950" lastFinishedPulling="2026-01-28 18:20:13.669993272 +0000 UTC m=+9285.945964793" observedRunningTime="2026-01-28 18:20:14.431518495 +0000 UTC m=+9286.707490016" watchObservedRunningTime="2026-01-28 18:20:14.453433797 +0000 UTC m=+9286.729405328" Jan 28 18:20:17 crc kubenswrapper[4903]: I0128 18:20:17.356724 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:17 crc kubenswrapper[4903]: I0128 18:20:17.357095 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:17 crc kubenswrapper[4903]: I0128 18:20:17.425850 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:21 crc kubenswrapper[4903]: I0128 18:20:21.414246 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:20:21 crc kubenswrapper[4903]: E0128 18:20:21.414936 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:20:27 crc kubenswrapper[4903]: I0128 18:20:27.831407 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:27 crc kubenswrapper[4903]: I0128 18:20:27.886472 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmmfm"] Jan 28 18:20:28 crc kubenswrapper[4903]: I0128 18:20:28.566471 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kmmfm" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="registry-server" containerID="cri-o://b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5" gracePeriod=2 Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.071602 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.219788 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm4c8\" (UniqueName: \"kubernetes.io/projected/9cd4b83e-1568-4284-b26c-790224b2be46-kube-api-access-mm4c8\") pod \"9cd4b83e-1568-4284-b26c-790224b2be46\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.219973 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-utilities\") pod \"9cd4b83e-1568-4284-b26c-790224b2be46\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.220154 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-catalog-content\") pod \"9cd4b83e-1568-4284-b26c-790224b2be46\" (UID: \"9cd4b83e-1568-4284-b26c-790224b2be46\") " Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.220829 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-utilities" (OuterVolumeSpecName: "utilities") pod "9cd4b83e-1568-4284-b26c-790224b2be46" (UID: "9cd4b83e-1568-4284-b26c-790224b2be46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.226316 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd4b83e-1568-4284-b26c-790224b2be46-kube-api-access-mm4c8" (OuterVolumeSpecName: "kube-api-access-mm4c8") pod "9cd4b83e-1568-4284-b26c-790224b2be46" (UID: "9cd4b83e-1568-4284-b26c-790224b2be46"). InnerVolumeSpecName "kube-api-access-mm4c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.248937 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cd4b83e-1568-4284-b26c-790224b2be46" (UID: "9cd4b83e-1568-4284-b26c-790224b2be46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.323279 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.323317 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd4b83e-1568-4284-b26c-790224b2be46-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.323331 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm4c8\" (UniqueName: \"kubernetes.io/projected/9cd4b83e-1568-4284-b26c-790224b2be46-kube-api-access-mm4c8\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.579784 4903 generic.go:334] "Generic (PLEG): container finished" podID="9cd4b83e-1568-4284-b26c-790224b2be46" containerID="b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5" exitCode=0 Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.579834 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmmfm" event={"ID":"9cd4b83e-1568-4284-b26c-790224b2be46","Type":"ContainerDied","Data":"b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5"} Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.579862 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmmfm" event={"ID":"9cd4b83e-1568-4284-b26c-790224b2be46","Type":"ContainerDied","Data":"2b35849b30ee79585e157e8bba23ce1b5788e83297560aab51a966a37ab3a45c"} Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.579880 4903 scope.go:117] "RemoveContainer" containerID="b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.579882 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmmfm" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.615845 4903 scope.go:117] "RemoveContainer" containerID="cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.649088 4903 scope.go:117] "RemoveContainer" containerID="7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.649290 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmmfm"] Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.662699 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmmfm"] Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.694615 4903 scope.go:117] "RemoveContainer" containerID="b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5" Jan 28 18:20:29 crc kubenswrapper[4903]: E0128 18:20:29.694938 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5\": container with ID starting with b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5 not found: ID does not exist" containerID="b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.694963 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5"} err="failed to get container status \"b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5\": rpc error: code = NotFound desc = could not find container \"b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5\": container with ID starting with b12c58fa686426920203a8a42df11dfe9a8290e5ecee8216c13eb6da48724ac5 not found: ID does not exist" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.694982 4903 scope.go:117] "RemoveContainer" containerID="cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94" Jan 28 18:20:29 crc kubenswrapper[4903]: E0128 18:20:29.695226 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94\": container with ID starting with cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94 not found: ID does not exist" containerID="cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.695242 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94"} err="failed to get container status \"cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94\": rpc error: code = NotFound desc = could not find container \"cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94\": container with ID starting with cb316b9a4eb91d5616690e51339dec592c51552ce4e1f672b4a3abdb257bac94 not found: ID does not exist" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.695255 4903 scope.go:117] "RemoveContainer" containerID="7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f" Jan 28 18:20:29 crc kubenswrapper[4903]: E0128 18:20:29.695423 4903 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f\": container with ID starting with 7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f not found: ID does not exist" containerID="7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f" Jan 28 18:20:29 crc kubenswrapper[4903]: I0128 18:20:29.695441 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f"} err="failed to get container status \"7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f\": rpc error: code = NotFound desc = could not find container \"7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f\": container with ID starting with 7f201541e9a82bed7dd2eae21380a4b069f5796265c12b969287b08f68d13c6f not found: ID does not exist" Jan 28 18:20:30 crc kubenswrapper[4903]: I0128 18:20:30.430166 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" path="/var/lib/kubelet/pods/9cd4b83e-1568-4284-b26c-790224b2be46/volumes" Jan 28 18:20:35 crc kubenswrapper[4903]: I0128 18:20:35.413462 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:20:35 crc kubenswrapper[4903]: E0128 18:20:35.414433 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:20:48 crc kubenswrapper[4903]: I0128 18:20:48.419980 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:20:48 crc kubenswrapper[4903]: E0128 18:20:48.420927 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.358132 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g67p9/must-gather-2w952"] Jan 28 18:20:51 crc kubenswrapper[4903]: E0128 18:20:51.359066 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="extract-utilities" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.359078 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="extract-utilities" Jan 28 18:20:51 crc kubenswrapper[4903]: E0128 18:20:51.359094 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="registry-server" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.359100 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="registry-server" Jan 28 18:20:51 crc kubenswrapper[4903]: E0128 
18:20:51.359122 4903 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="extract-content" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.359130 4903 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="extract-content" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.359322 4903 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd4b83e-1568-4284-b26c-790224b2be46" containerName="registry-server" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.360449 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.363812 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g67p9"/"openshift-service-ca.crt" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.368316 4903 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g67p9"/"kube-root-ca.crt" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.369253 4903 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-g67p9"/"default-dockercfg-hcj4x" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.369424 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g67p9/must-gather-2w952"] Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.441779 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4b43667c-57c2-4477-a08d-42e7c24c073a-must-gather-output\") pod \"must-gather-2w952\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.441995 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nnx\" (UniqueName: \"kubernetes.io/projected/4b43667c-57c2-4477-a08d-42e7c24c073a-kube-api-access-r7nnx\") pod \"must-gather-2w952\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.543847 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4b43667c-57c2-4477-a08d-42e7c24c073a-must-gather-output\") pod \"must-gather-2w952\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.544037 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7nnx\" (UniqueName: \"kubernetes.io/projected/4b43667c-57c2-4477-a08d-42e7c24c073a-kube-api-access-r7nnx\") pod \"must-gather-2w952\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.544996 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4b43667c-57c2-4477-a08d-42e7c24c073a-must-gather-output\") pod \"must-gather-2w952\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.579595 4903 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7nnx\" (UniqueName: \"kubernetes.io/projected/4b43667c-57c2-4477-a08d-42e7c24c073a-kube-api-access-r7nnx\") pod \"must-gather-2w952\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:51 crc kubenswrapper[4903]: I0128 18:20:51.681291 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:20:52 crc kubenswrapper[4903]: I0128 18:20:52.243086 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g67p9/must-gather-2w952"] Jan 28 18:20:52 crc kubenswrapper[4903]: I0128 18:20:52.861585 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g67p9/must-gather-2w952" event={"ID":"4b43667c-57c2-4477-a08d-42e7c24c073a","Type":"ContainerStarted","Data":"6900740f5f926419eaab994513e9de8a08265c2857e3eca7a213342a99006199"} Jan 28 18:21:01 crc kubenswrapper[4903]: I0128 18:21:01.414832 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:21:01 crc kubenswrapper[4903]: E0128 18:21:01.416824 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:21:05 crc kubenswrapper[4903]: E0128 18:21:05.077458 4903 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Jan 28 18:21:05 crc kubenswrapper[4903]: E0128 18:21:05.078150 4903 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 18:21:05 crc kubenswrapper[4903]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Jan 28 18:21:05 crc kubenswrapper[4903]: HAVE_SESSION_TOOLS=true Jan 28 18:21:05 crc kubenswrapper[4903]: else Jan 28 18:21:05 crc kubenswrapper[4903]: HAVE_SESSION_TOOLS=false Jan 28 18:21:05 crc kubenswrapper[4903]: fi Jan 28 18:21:05 crc kubenswrapper[4903]: Jan 28 18:21:05 crc kubenswrapper[4903]: Jan 28 18:21:05 crc kubenswrapper[4903]: echo "[disk usage checker] Started" Jan 28 18:21:05 crc kubenswrapper[4903]: target_dir="/must-gather" Jan 28 18:21:05 crc kubenswrapper[4903]: usage_percentage_limit="80" Jan 28 18:21:05 crc kubenswrapper[4903]: while true; do Jan 28 18:21:05 crc kubenswrapper[4903]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Jan 28 18:21:05 crc kubenswrapper[4903]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Jan 28 18:21:05 crc kubenswrapper[4903]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Jan 28 18:21:05 crc kubenswrapper[4903]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." 
Jan 28 18:21:05 crc kubenswrapper[4903]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 28 18:21:05 crc kubenswrapper[4903]: ps -o sess --no-headers | sort -u | while read sid; do Jan 28 18:21:05 crc kubenswrapper[4903]: [[ "$sid" -eq "${$}" ]] && continue Jan 28 18:21:05 crc kubenswrapper[4903]: pkill --signal SIGKILL --session "$sid" Jan 28 18:21:05 crc kubenswrapper[4903]: done Jan 28 18:21:05 crc kubenswrapper[4903]: else Jan 28 18:21:05 crc kubenswrapper[4903]: kill 0 Jan 28 18:21:05 crc kubenswrapper[4903]: fi Jan 28 18:21:05 crc kubenswrapper[4903]: exit 1 Jan 28 18:21:05 crc kubenswrapper[4903]: fi Jan 28 18:21:05 crc kubenswrapper[4903]: sleep 5 Jan 28 18:21:05 crc kubenswrapper[4903]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 28 18:21:05 crc kubenswrapper[4903]: setsid -w bash <<-MUSTGATHER_EOF Jan 28 18:21:05 crc kubenswrapper[4903]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Jan 28 18:21:05 crc kubenswrapper[4903]: MUSTGATHER_EOF Jan 28 18:21:05 crc kubenswrapper[4903]: else Jan 28 18:21:05 crc kubenswrapper[4903]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Jan 28 18:21:05 crc kubenswrapper[4903]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7nnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-2w952_openshift-must-gather-g67p9(4b43667c-57c2-4477-a08d-42e7c24c073a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 28 18:21:05 crc kubenswrapper[4903]: > logger="UnhandledError" Jan 28 18:21:05 crc kubenswrapper[4903]: E0128 18:21:05.085818 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-g67p9/must-gather-2w952" podUID="4b43667c-57c2-4477-a08d-42e7c24c073a" Jan 28 18:21:06 crc kubenswrapper[4903]: E0128 18:21:06.009012 4903 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-g67p9/must-gather-2w952" podUID="4b43667c-57c2-4477-a08d-42e7c24c073a" Jan 28 18:21:14 crc kubenswrapper[4903]: I0128 18:21:14.414652 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:21:14 crc kubenswrapper[4903]: E0128 18:21:14.415489 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:21:16 crc kubenswrapper[4903]: I0128 18:21:16.924382 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g67p9/must-gather-2w952"] Jan 28 18:21:16 crc kubenswrapper[4903]: I0128 18:21:16.935744 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g67p9/must-gather-2w952"] Jan 28 18:21:17 crc kubenswrapper[4903]: I0128 18:21:17.310757 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:21:17 crc kubenswrapper[4903]: I0128 18:21:17.398032 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4b43667c-57c2-4477-a08d-42e7c24c073a-must-gather-output\") pod \"4b43667c-57c2-4477-a08d-42e7c24c073a\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " Jan 28 18:21:17 crc kubenswrapper[4903]: I0128 18:21:17.398186 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7nnx\" (UniqueName: \"kubernetes.io/projected/4b43667c-57c2-4477-a08d-42e7c24c073a-kube-api-access-r7nnx\") pod \"4b43667c-57c2-4477-a08d-42e7c24c073a\" (UID: \"4b43667c-57c2-4477-a08d-42e7c24c073a\") " Jan 28 18:21:17 crc kubenswrapper[4903]: I0128 18:21:17.398493 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b43667c-57c2-4477-a08d-42e7c24c073a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4b43667c-57c2-4477-a08d-42e7c24c073a" (UID: "4b43667c-57c2-4477-a08d-42e7c24c073a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:21:17 crc kubenswrapper[4903]: I0128 18:21:17.399082 4903 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4b43667c-57c2-4477-a08d-42e7c24c073a-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 18:21:17 crc kubenswrapper[4903]: I0128 18:21:17.404866 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b43667c-57c2-4477-a08d-42e7c24c073a-kube-api-access-r7nnx" (OuterVolumeSpecName: "kube-api-access-r7nnx") pod "4b43667c-57c2-4477-a08d-42e7c24c073a" (UID: "4b43667c-57c2-4477-a08d-42e7c24c073a"). 
InnerVolumeSpecName "kube-api-access-r7nnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:21:17 crc kubenswrapper[4903]: I0128 18:21:17.502340 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7nnx\" (UniqueName: \"kubernetes.io/projected/4b43667c-57c2-4477-a08d-42e7c24c073a-kube-api-access-r7nnx\") on node \"crc\" DevicePath \"\"" Jan 28 18:21:18 crc kubenswrapper[4903]: I0128 18:21:18.124065 4903 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g67p9/must-gather-2w952" Jan 28 18:21:18 crc kubenswrapper[4903]: I0128 18:21:18.435471 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b43667c-57c2-4477-a08d-42e7c24c073a" path="/var/lib/kubelet/pods/4b43667c-57c2-4477-a08d-42e7c24c073a/volumes" Jan 28 18:21:28 crc kubenswrapper[4903]: I0128 18:21:28.423986 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:21:28 crc kubenswrapper[4903]: E0128 18:21:28.424982 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:21:41 crc kubenswrapper[4903]: I0128 18:21:41.414137 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:21:41 crc kubenswrapper[4903]: E0128 18:21:41.414906 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:21:56 crc kubenswrapper[4903]: I0128 18:21:56.413769 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:21:56 crc kubenswrapper[4903]: E0128 18:21:56.414573 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:22:10 crc kubenswrapper[4903]: I0128 18:22:10.416358 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:22:10 crc kubenswrapper[4903]: E0128 18:22:10.417245 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:22:25 crc 
kubenswrapper[4903]: I0128 18:22:25.091730 4903 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q8lcn"] Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.094868 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.105916 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8lcn"] Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.136590 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-catalog-content\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.136645 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8cq\" (UniqueName: \"kubernetes.io/projected/482975f4-d240-4961-b1b8-c95b83a1b106-kube-api-access-mk8cq\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.136849 4903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-utilities\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.238834 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-catalog-content\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.238881 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk8cq\" (UniqueName: \"kubernetes.io/projected/482975f4-d240-4961-b1b8-c95b83a1b106-kube-api-access-mk8cq\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.239054 4903 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-utilities\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.239485 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-utilities\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.239511 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-catalog-content\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.259685 4903 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk8cq\" (UniqueName: \"kubernetes.io/projected/482975f4-d240-4961-b1b8-c95b83a1b106-kube-api-access-mk8cq\") pod \"community-operators-q8lcn\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.414218 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:22:25 crc kubenswrapper[4903]: E0128 18:22:25.414480 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.416368 4903 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:25 crc kubenswrapper[4903]: I0128 18:22:25.995572 4903 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8lcn"] Jan 28 18:22:26 crc kubenswrapper[4903]: I0128 18:22:26.950041 4903 generic.go:334] "Generic (PLEG): container finished" podID="482975f4-d240-4961-b1b8-c95b83a1b106" containerID="600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439" exitCode=0 Jan 28 18:22:26 crc kubenswrapper[4903]: I0128 18:22:26.950106 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8lcn" event={"ID":"482975f4-d240-4961-b1b8-c95b83a1b106","Type":"ContainerDied","Data":"600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439"} Jan 28 18:22:26 crc kubenswrapper[4903]: I0128 18:22:26.950433 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8lcn" event={"ID":"482975f4-d240-4961-b1b8-c95b83a1b106","Type":"ContainerStarted","Data":"41716535a063f8ea79fb91b4d890a99c4cdd6cc5b8328f0b04f7aa7a3046a399"} Jan 28 18:22:27 crc kubenswrapper[4903]: I0128 18:22:27.963390 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8lcn" event={"ID":"482975f4-d240-4961-b1b8-c95b83a1b106","Type":"ContainerStarted","Data":"dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc"} Jan 28 18:22:29 crc kubenswrapper[4903]: I0128 18:22:29.982926 4903 generic.go:334] "Generic (PLEG): container finished" podID="482975f4-d240-4961-b1b8-c95b83a1b106" containerID="dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc" exitCode=0 Jan 28 18:22:29 crc kubenswrapper[4903]: I0128 18:22:29.983003 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8lcn" event={"ID":"482975f4-d240-4961-b1b8-c95b83a1b106","Type":"ContainerDied","Data":"dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc"} Jan 28 18:22:30 crc kubenswrapper[4903]: I0128 18:22:30.996523 4903 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8lcn" event={"ID":"482975f4-d240-4961-b1b8-c95b83a1b106","Type":"ContainerStarted","Data":"065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c"} Jan 28 18:22:31 crc kubenswrapper[4903]: I0128 18:22:31.023993 4903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q8lcn" podStartSLOduration=2.567465801 podStartE2EDuration="6.02397129s" podCreationTimestamp="2026-01-28 18:22:25 +0000 UTC" firstStartedPulling="2026-01-28 18:22:26.951971075 +0000 UTC m=+9419.227942586" lastFinishedPulling="2026-01-28 18:22:30.408476544 +0000 UTC m=+9422.684448075" observedRunningTime="2026-01-28 18:22:31.014061138 +0000 UTC m=+9423.290032659" watchObservedRunningTime="2026-01-28 18:22:31.02397129 +0000 UTC m=+9423.299942801" Jan 28 18:22:35 crc kubenswrapper[4903]: I0128 18:22:35.417361 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:35 crc kubenswrapper[4903]: I0128 18:22:35.417880 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:37 crc kubenswrapper[4903]: I0128 18:22:37.040851 4903 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q8lcn" podUID="482975f4-d240-4961-b1b8-c95b83a1b106" containerName="registry-server" probeResult="failure" output=< Jan 28 18:22:37 crc kubenswrapper[4903]: timeout: failed to connect service ":50051" within 1s Jan 28 18:22:37 crc kubenswrapper[4903]: > Jan 28 18:22:38 crc kubenswrapper[4903]: I0128 18:22:38.428352 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:22:38 crc kubenswrapper[4903]: E0128 18:22:38.428896 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:22:45 crc kubenswrapper[4903]: I0128 18:22:45.471601 4903 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:45 crc kubenswrapper[4903]: I0128 18:22:45.548961 4903 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:45 crc kubenswrapper[4903]: I0128 18:22:45.719623 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8lcn"] Jan 28 18:22:47 crc kubenswrapper[4903]: I0128 18:22:47.174728 4903 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q8lcn" podUID="482975f4-d240-4961-b1b8-c95b83a1b106" containerName="registry-server" containerID="cri-o://065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c" gracePeriod=2 Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.095934 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.145913 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-catalog-content\") pod \"482975f4-d240-4961-b1b8-c95b83a1b106\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.146022 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk8cq\" (UniqueName: \"kubernetes.io/projected/482975f4-d240-4961-b1b8-c95b83a1b106-kube-api-access-mk8cq\") pod \"482975f4-d240-4961-b1b8-c95b83a1b106\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.146083 4903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-utilities\") pod \"482975f4-d240-4961-b1b8-c95b83a1b106\" (UID: \"482975f4-d240-4961-b1b8-c95b83a1b106\") " Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.147378 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-utilities" (OuterVolumeSpecName: "utilities") pod "482975f4-d240-4961-b1b8-c95b83a1b106" (UID: "482975f4-d240-4961-b1b8-c95b83a1b106"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.155740 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482975f4-d240-4961-b1b8-c95b83a1b106-kube-api-access-mk8cq" (OuterVolumeSpecName: "kube-api-access-mk8cq") pod "482975f4-d240-4961-b1b8-c95b83a1b106" (UID: "482975f4-d240-4961-b1b8-c95b83a1b106"). InnerVolumeSpecName "kube-api-access-mk8cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.184939 4903 generic.go:334] "Generic (PLEG): container finished" podID="482975f4-d240-4961-b1b8-c95b83a1b106" containerID="065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c" exitCode=0 Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.184989 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8lcn" event={"ID":"482975f4-d240-4961-b1b8-c95b83a1b106","Type":"ContainerDied","Data":"065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c"} Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.185023 4903 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8lcn" event={"ID":"482975f4-d240-4961-b1b8-c95b83a1b106","Type":"ContainerDied","Data":"41716535a063f8ea79fb91b4d890a99c4cdd6cc5b8328f0b04f7aa7a3046a399"} Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.185044 4903 scope.go:117] "RemoveContainer" containerID="065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.185209 4903 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8lcn" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.206714 4903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "482975f4-d240-4961-b1b8-c95b83a1b106" (UID: "482975f4-d240-4961-b1b8-c95b83a1b106"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.224785 4903 scope.go:117] "RemoveContainer" containerID="dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.246064 4903 scope.go:117] "RemoveContainer" containerID="600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.248047 4903 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.248087 4903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk8cq\" (UniqueName: \"kubernetes.io/projected/482975f4-d240-4961-b1b8-c95b83a1b106-kube-api-access-mk8cq\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.248098 4903 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/482975f4-d240-4961-b1b8-c95b83a1b106-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.291382 4903 scope.go:117] "RemoveContainer" containerID="065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c" Jan 28 18:22:48 crc kubenswrapper[4903]: E0128 18:22:48.291861 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c\": container with ID starting with 065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c not found: ID does not exist" containerID="065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.291896 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c"} err="failed to get container status \"065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c\": rpc error: code = NotFound desc = could not find container \"065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c\": container with ID starting with 065290ec71de4b66b013eb6edb535b6b59bebcc123d0757fa280ef9627e8cf5c not found: ID does not exist" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.291925 4903 scope.go:117] "RemoveContainer" containerID="dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc" Jan 28 18:22:48 crc kubenswrapper[4903]: E0128 18:22:48.292285 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc\": container with ID starting with dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc not found: ID does not exist" 
containerID="dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.292317 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc"} err="failed to get container status \"dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc\": rpc error: code = NotFound desc = could not find container \"dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc\": container with ID starting with dfcd4b7fa9a14bb18a2c84bca368b90968dbcce54e03a457f9a7aeb2a2dd8cbc not found: ID does not exist" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.292333 4903 scope.go:117] "RemoveContainer" containerID="600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439" Jan 28 18:22:48 crc kubenswrapper[4903]: E0128 18:22:48.292703 4903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439\": container with ID starting with 600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439 not found: ID does not exist" containerID="600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.292731 4903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439"} err="failed to get container status \"600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439\": rpc error: code = NotFound desc = could not find container \"600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439\": container with ID starting with 600245d0dfe9868af6f8fd7f73748a07131c141500e5979085631188d2b5e439 not found: ID does not exist" Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.513268 4903 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8lcn"] Jan 28 18:22:48 crc kubenswrapper[4903]: I0128 18:22:48.524026 4903 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q8lcn"] Jan 28 18:22:50 crc kubenswrapper[4903]: I0128 18:22:50.416855 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:22:50 crc kubenswrapper[4903]: E0128 18:22:50.417825 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613" Jan 28 18:22:50 crc kubenswrapper[4903]: I0128 18:22:50.428443 4903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482975f4-d240-4961-b1b8-c95b83a1b106" path="/var/lib/kubelet/pods/482975f4-d240-4961-b1b8-c95b83a1b106/volumes" Jan 28 18:23:01 crc kubenswrapper[4903]: I0128 18:23:01.414476 4903 scope.go:117] "RemoveContainer" containerID="405ac17dd3c91af1be8b65d5bc70b3938dc188b9e965c23b3da490519cbcc27b" Jan 28 18:23:01 crc kubenswrapper[4903]: E0128 18:23:01.415638 4903 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-plxzk_openshift-machine-config-operator(dacf7a8c-d645-4596-9266-092101fc3613)\"" pod="openshift-machine-config-operator/machine-config-daemon-plxzk" podUID="dacf7a8c-d645-4596-9266-092101fc3613"